What happens when a state institution that already harmed citizens through data misuse appears to collect, route, and retain behavioural data from those same citizens again?
People who were already damaged by the Dutch childcare benefits scandal are allegedly being monitored when they visit the very website created to help repair that damage.
Summary: The report alleges that the Dutch Tax Administration’s recovery website for victims of the childcare benefits scandal sends visitor behaviour, search queries, chatbot input and other sensitive signals to Adobe in the US through opaque tracking infrastructure.
If accurate, this is morally and ethically serious because the same institution that harmed citizens through data misuse may be exposing those same citizens to another hidden data pipeline while pretending to rebuild trust.
The broader lesson for AI accountability is that unfair treatment should be challenged not only at the decision/output level, but across the full data chain: collection, profiling, vendor access, retention, model use, oversight and appeal.
by ChatGPT-5.5
The report “The Benefits Scandal Recovery Operation — Powered by Adobe USA” is not simply about cookies. It is about what happens when a state institution that already harmed citizens through data misuse appears to collect, route, and retain behavioural data from those same citizens again — this time through a commercial analytics stack connected to Adobe. The moral force of the report comes from that repetition: people who were already damaged by the Dutch childcare benefits scandal are allegedly being monitored when they visit the very website created to help repair that damage.
1. Background: why this matters
For readers unfamiliar with the Dutch context, the toeslagenaffaire — the Dutch childcare benefits scandal — was one of the most serious administrative scandals in modern Dutch governance. Thousands of parents were wrongly treated as fraudsters by the Dutch tax and benefits authorities. Many were pushed into debt, stress, family disruption, legal battles and loss of trust in the state. The Dutch Data Protection Authority later fined the Tax Administration €2.75 million for unlawful processing of nationality data in the childcare-benefits context.
The Dutch government then created a recovery operation — Herstel Toeslagen — to compensate and support affected parents. That matters because the recovery website is not a normal public-service website. It is a trust-repair interface. People visiting it are not ordinary consumers browsing a shop. They may be financially vulnerable, traumatised, legally exposed, angry, confused, desperate, or simply trying to understand whether the same state that harmed them will finally help them.
The Hackedemia report alleges that on herstel.toeslagen.nl, the site for victims of the benefits scandal, page visits, search queries, chatbot inputs, feedback text, errors and other behavioural signals are routed to Adobe infrastructure in the United States through a CNAME setup that makes the traffic appear first-party to the browser. The report says this was verified through HAR captures, DNS checks, JavaScript analysis and a scanner, and it distinguishes between hard evidence, circumstantial evidence and interpretation.
The timing makes the allegation more serious. In December 2023, after Stichting Data Bescherming Nederland (SDBN) brought a mass claim against Adobe, the Belastingdienst (the Dutch Tax Administration) reportedly said it had switched off Adobe cookie functionality “uit voorzorg” (as a precaution) until Adobe had clarified the matter. The Hackedemia report says that in 2026, Adobe tracking was not only still active but had a separate configuration for the recovery website.
2. The core concern: this is not “just analytics”
The most important point is that the report reframes analytics as administrative surveillance. That may sound dramatic, but in a state-benefits context it is not exaggerated. Analytics data on a tax website can reveal far more than “website performance.” Search terms, page titles, error messages, chatbot questions and feedback entries can disclose financial distress, family status, legal uncertainty, debt, childcare problems, divorce, disability, nationality-related questions, fraud allegations, or recovery-claim status.
The report’s key technical concern is CNAME cloaking. In plain English: instead of the browser visibly contacting an Adobe tracking domain, a subdomain under the Belastingdienst domain allegedly points to Adobe infrastructure. That can make third-party tracking look like first-party traffic. To a normal citizen, nothing obvious says: “Your behaviour on this government recovery site may be processed through Adobe.”
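To make the mechanism concrete: this kind of setup can be surfaced with an ordinary DNS lookup, which is essentially what the report’s DNS checks do. The sketch below is a minimal illustration using the dnspython library; the subdomain and the list of first-party zones are placeholders, since the report’s exact hostnames are not reproduced here.

```python
# Minimal sketch of a CNAME-cloaking check, assuming the dnspython library
# (pip install dnspython). The hostnames below are placeholders: the report's
# exact tracking subdomain is not reproduced here.
import dns.resolver

# Zones a visitor would reasonably consider first-party (assumption).
FIRST_PARTY_SUFFIXES = (".toeslagen.nl", ".belastingdienst.nl")

def cname_chain(hostname: str) -> list[str]:
    """Follow and return the chain of CNAME targets for a hostname."""
    chain, current = [], hostname
    for _ in range(10):  # guard against CNAME loops
        try:
            answers = dns.resolver.resolve(current, "CNAME")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            break  # no further CNAME record: end of the chain
        current = str(answers[0].target).rstrip(".")
        chain.append(current)
    return chain

def looks_cloaked(hostname: str) -> bool:
    """True if a first-party-looking host resolves outside the first-party zones."""
    return any(not t.endswith(FIRST_PARTY_SUFFIXES) for t in cname_chain(hostname))

# Hypothetical subdomain for illustration only.
host = "stats.herstel.toeslagen.nl"
print(host, "->", cname_chain(host), "| cloaked:", looks_cloaked(host))
```

In the configuration the report describes, a chain like this would end in an Adobe-operated zone rather than a Belastingdienst one, even though the browser treats the cookies and requests as first-party. That asymmetry is precisely what the report objects to.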
The second concern is the persistent Adobe Experience Cloud ID. The report says a 38-digit identifier is generated or reused and stored for two years. The problem is not merely the number. The problem is that a long-lived identifier can turn separate interactions into a profile over time: pages visited, searches made, errors triggered, feedback submitted, chatbot questions asked. In privacy law and governance terms, persistence changes the nature of the activity. A visit counter becomes a behavioural dossier.
The third concern is free-text capture. If a visitor types into a chatbot or feedback box, they may reveal exactly the type of sensitive context that a public authority should handle with extreme restraint. A person might write: “I am a victim of the benefits scandal and I cannot manage this process,” or “How do I deal with childcare benefits after divorce?” The report alleges that literal free-text input can be passed into the analytics layer.
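Both the persistence concern and the free-text concern can be checked from a recorded browsing session, which is what the report’s HAR captures amount to. A HAR file is plain JSON, so a short script can scan it for analytics beacons whose query strings carry typed text, and for cookies set with multi-year lifetimes. The sketch below is illustrative only: the host fragments and parameter names are assumptions made for the example, not the report’s actual field names.

```python
# Minimal sketch: scan a browser HAR capture for two of the report's concerns,
# long-lived identifiers and free text in outgoing analytics beacons. The host
# fragments and parameter names below are assumptions for the example, not the
# report's actual field names.
import json
from urllib.parse import urlsplit, parse_qs

SUSPECT_HOSTS = ("demdex", "omtrdc", "adobe")   # assumed analytics host fragments
FREE_TEXT_PARAMS = {"q", "search", "feedback"}  # hypothetical parameter names
TWO_YEARS = 2 * 365 * 24 * 3600                 # seconds

def scan_har(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]
    for entry in entries:
        url = entry["request"]["url"]
        host = urlsplit(url).hostname or ""
        if not any(s in host for s in SUSPECT_HOSTS):
            continue
        # Free text travelling in the beacon's query string.
        params = parse_qs(urlsplit(url).query)
        for name in FREE_TEXT_PARAMS & params.keys():
            print(f"free text -> {host}: {name}={params[name]!r}")
        # Cookies set with multi-year lifetimes.
        for header in entry["response"]["headers"]:
            value = header["value"]
            if header["name"].lower() == "set-cookie" and "max-age=" in value.lower():
                max_age = int(value.lower().split("max-age=")[1].split(";")[0])
                if max_age >= TWO_YEARS:
                    print(f"persistent cookie -> {host}: {value[:60]}...")

scan_har("session.har")  # exported from the browser's developer tools
```

Anyone can produce such a capture themselves: browsers export HAR files from the developer tools’ network tab. That is part of why findings of this kind, when the report separates hard evidence from interpretation, are straightforward for regulators and journalists to reproduce.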
The fourth concern is the recovery-site label itself. The report says Adobe calls from the recovery site include a title such as “Website voor gedupeerde ouders | Herstel Toeslagen (UHT)” (in English, “Website for affected parents | Benefits Recovery (UHT)”) and a classification connected to childcare-benefits recovery. That means the data does not only show that someone visited a government website. It may show that someone visited a site for people harmed by a specific administrative scandal.
The fifth concern is the contrast with the authenticated environment. The report says that mijn.toeslagen.nl, the logged-in environment, was clean: no Adobe, no Google, no Meta, no AB Tasty. If accurate, this is one of the strongest findings. It suggests that the government knows how to operate a benefits environment without these trackers. The tracking on the public recovery site is therefore not an unavoidable technical necessity. It is an architectural and governance choice.
3. The concerns flagged in the report
The report flags at least twelve major concerns.
First, data from vulnerable citizens may be routed to a US company when they visit a site created specifically for victims of the Dutch benefits scandal.
Second, the implementation allegedly uses CNAME cloaking, reducing transparency for users and potentially bypassing ordinary browser or tracker-blocking signals.
Third, persistent identifiers are allegedly used for up to two years, which is difficult to reconcile with the idea of minimal, anonymous public-service analytics.
Fourth, search queries may be transmitted in full, meaning a visitor’s tax, family, debt, fraud, childcare or recovery concerns could become part of a behavioural record.
Fifth, chatbot inputs and feedback text may be captured literally, creating obvious risks of sensitive personal data leakage.
Sixth, error messages may disclose personal or administrative details, especially if they relate to identity, benefit status, dates of birth, BSN-like identifiers (the BSN is the Dutch citizen service number), claim status or mismatched records.
Seventh, the cookie statement may be misleading if it says cookies contain no personal data while persistent identifiers and contextual page/search data are being processed.
Eighth, the public statement from December 2023 may be inconsistent with the 2026 measurements, because the Belastingdienst reportedly said Adobe cookie functionality had been switched off as a precaution.
Ninth, Adobe Audience Manager references are allegedly present, raising concerns about cross-site profiling infrastructure even if the report does not prove that such profiling was actively taking place.
Tenth, AB Tasty is allegedly present for A/B testing, raising a separate public-law concern: should different citizens receive different versions of government information in a benefits context?
Eleventh, key documentation may be missing or non-public, including a Data Protection Impact Assessment (DPIA) and processor agreements for a site serving a highly vulnerable population.
Twelfth, the governance optics are disastrous: the same state institution that damaged people through data-driven suspicion appears to be using opaque behavioural tracking on the recovery pathway.
4. Is it immoral?
Yes, if the report’s central findings are accurate, the situation can reasonably be called immoral.
Immorality here does not depend on proving malicious intent. The report itself is careful to say it does not prove bad faith or individual responsibility. The moral failure is structural. A state that has harmed citizens through opaque data practices owes those citizens extraordinary restraint, dignity and transparency. It should not treat a recovery website as an ordinary digital-marketing surface.
The strongest moral objection is the betrayal of vulnerability. People seeking redress from government misconduct are in a different moral category from ordinary website visitors. They are not “traffic.” They are not a conversion funnel. They are people whose relationship with the state has already been broken. Monitoring their recovery journey through a third-party analytics stack, especially one hidden behind technical abstraction, is a failure of empathy and institutional memory.
The second moral objection is repetition without learning. The benefits scandal was not only about wrong decisions. It was about the state’s willingness to let systems, risk models, classifications and bureaucratic logic override human reality. If the recovery process then recreates opaque data extraction, the institution has not absorbed the deeper lesson. It has repaired the form while repeating the pattern.
5. Is it unethical?
Yes. The conduct described is very plausibly unethical, even before a court or regulator decides whether it is illegal.
Public-sector data ethics requires more than technical compliance. It requires necessity, proportionality, transparency, accountability and special care for vulnerable groups. On that standard, the alleged configuration is ethically weak.
The use of CNAME cloaking is particularly problematic. Even when legal, it creates an asymmetry of knowledge. The citizen sees a trusted government domain. The underlying data flow may involve a commercial analytics provider. That undermines meaningful consent, meaningful objection and meaningful public scrutiny.
The alleged free-text capture is ethically worse. A feedback box or chatbot creates a conversational expectation. Citizens may think they are speaking to the government. They are unlikely to understand that their literal words may enter an analytics pipeline. In high-trust domains, this is exactly where data minimisation should be strongest.
The A/B-testing concern is also ethically significant. In a commercial context, A/B testing can be harmless or useful. In a public-benefits context, it can become unequal treatment by design. If two citizens receive different wording about rights, deadlines, eligibility, recovery routes or complaint options, the state must be able to prove that experimentation does not undermine equal access to justice or benefits.
6. Is it illegal?
This is where the answer must be more careful. The report raises serious legal red flags, but illegality is ultimately for regulators and courts to determine.
That said, the alleged facts point to several plausible legal problems.
Under the GDPR, a persistent unique identifier linked to browsing behaviour can qualify as personal data if a person can be identified directly or indirectly. The report argues that the Adobe Experience Cloud ID, combined with page titles, search terms and recovery-site classifications, should not be described as “no personal data.” That argument is strong.
Under ePrivacy and Dutch cookie rules, cookies or similar technologies generally require prior consent unless they are strictly necessary or used only for privacy-friendly analytics. A two-year identifier, detailed custom variables, click coordinates, high-entropy device hints, full search queries and possible cross-site profiling infrastructure look very different from simple anonymous visit counting.
Under GDPR accountability rules, the Belastingdienst would also need to demonstrate a valid legal basis, data minimisation, purpose limitation, storage limitation, appropriate processor arrangements, transparency and, given the sensitivity and scale, in all likelihood a rigorous DPIA. The Dutch government itself has publicly addressed cookies and online tracking, and the official Government.nl cookie page says Platform Rijksoverheid Online does not use tracking cookies and uses analytics in ways designed not to trace data back to individual visitors.
The Adobe litigation context matters too. The Rotterdam court confirmed in May 2025 that the SDBN case against Adobe is a collective action involving claims that Adobe breaches privacy rules and infringes the privacy of many people, although the court had not reached a final decision and was waiting on EU-law questions relevant to admissibility. So it would be wrong to say “Adobe has been found liable” on the basis of that case. But it is fair to say the legal environment around Adobe tracking was already contested and known.
So the best legal characterisation is this: not proven illegal in this essay, but legally high-risk, regulator-ready, and difficult to defend if the report’s evidence is accurate.
7. The AI connection: why this matters beyond cookies
This raises a broader question: what should citizens or organisations do if they feel unfairly treated by AI? The report is not primarily about an AI model making a decision. It is about data collection, profiling infrastructure, analytics governance and administrative trust. But that is precisely why it matters for AI.
AI harms rarely begin with “the model.” They begin with data capture, classification, weak consent, hidden vendors, poor documentation, vague purposes, technical opacity and institutional incentives to optimise systems rather than listen to people. By the time an AI system denies a benefit, flags a person as risky, ranks a complaint as low priority, or generates a misleading explanation, the underlying harm may already be buried in logs, identifiers, data flows, vendor contracts and model features.
The lesson is therefore broader: when citizens are treated unfairly by AI, they should not only challenge the output. They should challenge the data supply chain.
Who collected the data? What identifiers were used? Which vendors processed it? Was it used for analytics, segmentation, fraud detection, model training, risk scoring or personalisation? Was there a DPIA? Was there human review? Were vulnerable groups tested for disparate impact? Were people told clearly? Could they object? Were records retained? Could the system be audited?
This is the future of AI accountability. Not “the chatbot was wrong,” but “show me the chain of data, decisions, models, vendors, purposes and safeguards that produced this outcome.”
8. Recommendations for citizens unfairly treated by AI
Citizens anywhere in the world should become more evidence-driven when challenging AI or automated systems.
First, preserve the interaction. Save screenshots, URLs, timestamps, emails, letters, chatbot transcripts, account notices, decision letters and app messages. If possible, record the exact steps that led to the result. Automated systems are often changed later; evidence disappears quickly.
Second, ask whether automation was used. Do not accept vague wording such as “system issue,” “risk assessment,” “policy rules,” or “security check.” Ask whether an algorithm, AI model, automated decision system, scoring tool, fraud model, recommender system, chatbot or third-party analytics provider influenced the result.
Third, request your data. In GDPR countries, use a data subject access request. Ask for personal data, derived data, scores, categories, logs, identifiers, risk flags, model inputs, decision rules, recipients, processors, retention periods and sources of the data. Outside GDPR jurisdictions, look for equivalent privacy, consumer, credit, employment, education, healthcare or public-records rights.
Fourth, ask for human review. A common institutional trick is to say humans are “in the loop” when in practice they rubber-stamp system outputs. Ask who reviewed the case, what discretion they had, what evidence they considered, and whether they could override the system.
Fifth, challenge accuracy and relevance. AI and analytics systems often use stale, incomplete, inferred or proxy data. Say specifically: “This data is inaccurate,” “This inference is unsupported,” “This factor is irrelevant,” or “This classification is discriminatory or disproportionate.”
Sixth, look for group harm. If many people with similar characteristics or situations are affected, the issue may not be an individual mistake. It may be systemic. Collective action, regulator complaints, NGO involvement or media scrutiny may be more effective than isolated appeals.
Seventh, file complaints in the right places. Depending on the country and sector, that may mean a data protection authority, consumer protection agency, equality body, ombudsman, financial regulator, labour regulator, education regulator, healthcare regulator, court, parliamentary representative, or public-sector inspectorate.
Eighth, do not let institutions hide behind vendors. If a bank, school, employer, hospital, platform or government agency used a vendor system, the deploying organisation is usually still responsible for the decision and the consequences. “The vendor did it” is not accountability.
9. Recommendations for organisations unfairly treated by AI
Organisations can also be harmed by AI: wrongly delisted sellers, false fraud flags, account suspensions, procurement exclusions, insurance pricing, reputational scoring, platform demotion, automated copyright accusations, sanctions-screening errors or AI-generated misinformation.
The first recommendation is to treat the AI harm as an evidence case, not a customer-service dispute. Build a timeline. Preserve notices. Capture API responses. Record account status changes. Save metadata. Track revenue impact. Keep copies of policy pages before they change.
Second, demand the decision basis. Ask what rule, score, model, policy or dataset triggered the action. Ask whether the decision was fully automated, partly automated or human-reviewed. Ask which appeal path allows meaningful review.
Third, separate the legal theories. AI unfairness may involve data protection, contract breach, consumer law, competition law, employment law, defamation, discrimination, sector regulation, administrative law, procurement law or platform-governance obligations. Do not frame everything as “AI bias.” Frame it as the specific harm the law recognises.
Fourth, look for audit hooks. Many organisations lose because they cannot show exactly what happened. Preserve logs, timestamps, user IDs, model outputs, prompts, screenshots, transaction IDs and communications. If you later need a regulator, court or journalist, a clean evidence package matters.
Fifth, escalate collectively where possible. Platforms and public bodies often ignore one complaint but respond to patterns. Trade associations, professional bodies, civil-society groups and class-action mechanisms can turn isolated unfairness into systemic accountability.
10. Recommendations for governments and public bodies
The most important recommendation for public bodies is simple: do not use commercial behavioural tracking on vulnerable public-service pathways unless you can publicly defend every data point collected.
Public bodies should adopt a default rule: no hidden third-party tracking, no CNAME cloaking, no behavioural profiling, no A/B testing of rights-related information, no free-text analytics capture, no unnecessary cross-border transfers, no long-lived identifiers, and no vague cookie statements.
For any AI or analytics system touching citizens’ rights, benefits, taxes, immigration, policing, healthcare, education or employment, governments should publish:
the purpose of the system;
the data collected;
the vendors involved;
the legal basis;
the DPIA or algorithmic impact assessment;
the retention period;
the human-review process;
the appeal route;
the bias and error testing;
the kill-switch or suspension process.
The deeper lesson is that trust repair requires data restraint. A government cannot rebuild trust with citizens while quietly expanding the technical surface through which those citizens are monitored.
11. Conclusion: the scandal after the scandal
The report matters because it shows how institutional harm can reproduce itself in quieter technical form. The original childcare benefits scandal was about wrongful suspicion, administrative rigidity and data-driven injustice. The alleged Adobe configuration is not the same kind of harm, but it belongs to the same family of failure: citizens become data subjects before they are treated as human beings.
Calling the situation immoral is justified if the findings are accurate. Calling it unethical is even easier. Calling it illegal requires regulatory or judicial determination, but the legal red flags are serious enough that any responsible institution would suspend the practice, disclose the architecture, publish the DPIA, correct the cookie statement, and explain who authorised the setup.
The global lesson for AI is stark: unfair automated treatment is rarely just a bad output. It is usually the visible symptom of a hidden data pipeline. Citizens and organisations should therefore challenge not only the AI decision, but the entire chain behind it: collection, classification, profiling, vendor access, retention, model use, human oversight and appeal.
The question for the AI age is not only whether machines make fair decisions. It is whether institutions can prove that the systems around those machines are worthy of trust.