Asking AI: Should there be a 'People's AI' to monitor AI makers and their bots for living up to their promises of self-regulation?

Google Bard: A People's AI could help to do this by monitoring and regulating the use of AI, making recommendations and suggestions to AI companies, and directly intervening when needed

Question 1 of 2 for AI services: please read https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/ and https://arstechnica.com/ai/2023/07/openai-google-will-watermark-ai-generated-content-to-hinder-deepfakes-misinfo/ and also read the comments on this LinkedIn post https://www.linkedin.com/posts/jonasschuett_fact-sheet-biden-harris-administration-secures-activity-7088092855679737856-rKKi Can you indicate whether there is a case for a so-called ‘People’s AI’ that monitors the performance of these AI companies in the context of the promises made to the Biden-Harris Administration and society as a whole. What could a People’s AI monitor and analyse and what type of recommendations and suggestions could it make? Could it also directly intervene and steer these companies when needed? Or suggest new or more adequate guardrails and other restrictions?

ChatGPT-4’s answer (used Link Reader):

Based on the content from the White House, Ars Technica, and the LinkedIn post, it appears that leading AI companies have made voluntary commitments to manage the risks posed by AI. These commitments include:

  1. Ensuring products are safe before introducing them to the public through internal and external security testing.

  2. Sharing information across the industry and with governments, civil society, and academia on managing AI risks.

  3. Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.

  4. Facilitating third-party discovery and reporting of vulnerabilities in their AI systems.

  5. Developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system.

  6. Publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.

  7. Prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy.

  8. Developing and deploying advanced AI systems to help address society’s greatest challenges.

In the context of these commitments, a so-called 'People's AI' could potentially play a role in monitoring and analyzing the performance of these AI companies. Here's how:

  1. Monitoring and Analysis: A People's AI could monitor the implementation of these commitments by the AI companies. It could analyze whether the companies are adhering to their commitments, such as conducting adequate security testing, sharing information on managing AI risks, and developing robust watermarking systems. It could also analyze the public reports of the AI systems' capabilities and limitations, and assess whether these reports are accurate and comprehensive.

  2. Recommendations and Suggestions: Based on its monitoring and analysis, a People's AI could make recommendations and suggestions for improvements. For example, if it finds that a company's security testing is inadequate, it could suggest ways to improve the testing process. If it finds that a company's public report is not comprehensive, it could recommend additional areas to cover in the report.

  3. Intervention and Steering: While a People's AI could potentially suggest interventions, the actual power to intervene and steer these companies would likely need to come from regulatory authorities. The AI could provide data and analysis to support these authorities in their decision-making.

  4. Guardrails and Restrictions: A People's AI could suggest new or more adequate guardrails and restrictions based on its analysis of the AI landscape. For example, if it finds that current watermarking systems are not effective in letting users know when content is AI-generated, it could suggest more effective systems.

However, it's important to note that the development and implementation of a People's AI would come with its own set of challenges and risks, such as ensuring the AI's impartiality and managing the privacy and security of the data it collects and analyzes.
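
To make the watermarking commitment discussed above a little more concrete, here is a deliberately simplified sketch of the kind of provenance check a monitoring body could run. It is not how OpenAI’s or Google’s watermarking will actually work (those technical details are not public in the sources above); it simply assumes, purely for illustration, that AI-generated content ships with an HMAC tag that an auditor can verify.

```python
# Illustrative sketch only: a toy provenance check, not any vendor's actual
# watermarking scheme. Assumes (hypothetically) that AI-generated content is
# accompanied by an HMAC tag computed with a provider-held or escrowed key.
import hashlib
import hmac

def sign_content(content: bytes, provider_key: bytes) -> str:
    """Provider side: attach an HMAC tag to content it generates."""
    return hmac.new(provider_key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, provider_key: bytes) -> bool:
    """Auditor side: check whether the tag still matches the content."""
    expected = hmac.new(provider_key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    key = b"escrowed-provider-key"  # hypothetical key arrangement
    text = b"This paragraph was generated by a language model."
    tag = sign_content(text, key)
    print(verify_content(text, tag, key))           # True: provenance intact
    print(verify_content(text + b"!", tag, key))    # False: content was altered
```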

Google Bard’s answer: Sure, I can help you with that.

The Biden-Harris Administration has made a number of commitments to ensuring that AI is developed and used responsibly. These commitments include:

  • Protecting people's rights and safety. This means ensuring that AI systems are not used to discriminate against or harm individuals.

  • Promoting transparency and accountability. This means ensuring that people can understand how AI systems work and hold companies accountable for their use of AI.

  • Ensuring that AI is developed and used for the public good. This means ensuring that AI is used to solve real-world problems and improve people's lives.

In order to monitor the performance of AI companies in the context of these promises, a "People's AI" could monitor and analyze the following:

  • The use of AI for discrimination or harm. This could include monitoring for cases where AI systems are used to discriminate against individuals on the basis of race, gender, religion, or other protected characteristics. It could also include monitoring for cases where AI systems are used to harm individuals, such as by spreading misinformation or propaganda.

  • The transparency and accountability of AI systems. This could include monitoring for cases where companies are not disclosing how their AI systems work or where they are not providing users with the ability to control their data.

  • The use of AI for the public good. This could include monitoring for cases where AI systems are being used to solve real-world problems, such as climate change or poverty. It could also include monitoring for cases where AI systems are being used to improve people's lives, such as by providing better healthcare or education.

A People's AI could also make recommendations and suggestions to AI companies on how to improve their performance in these areas. For example, it could recommend that companies adopt more transparent and accountable practices or that they use AI for the public good more often.

In some cases, a People's AI could directly intervene and steer AI companies when needed. For example, it could issue a cease-and-desist order if a company is using AI for discrimination or harm. It could also suggest new or more adequate guardrails and other restrictions on the use of AI.

Ultimately, the goal of a People's AI would be to ensure that AI is developed and used responsibly for the benefit of all people.

The comments on the LinkedIn post suggest that there is a growing public interest in ensuring that AI is used responsibly. Many people are concerned about the potential for AI to be used for discrimination, harm, or other negative purposes. They believe that a People's AI could help to monitor and regulate the use of AI and ensure that it is used for good.

I believe that there is a strong case for a People's AI. AI is a powerful technology with the potential to have a profound impact on our lives. It is important that we ensure that AI is used responsibly and for the benefit of all people. A People's AI could help to do this by monitoring and regulating the use of AI, making recommendations and suggestions to AI companies, and directly intervening when needed.

Bing Chat’s answer: That’s a very interesting and complex question. I’ll try to give you my perspective based on the sources you provided and my own knowledge.

First, let me summarize the main points of the sources:

  • The White House fact sheet announces that several leading AI companies, including Google, Microsoft, OpenAI, IBM, and Amazon, have voluntarily committed to manage the risks posed by AI and to adhere to certain principles and practices, such as transparency, accountability, fairness, safety, security, and human dignity.

  • The Ars Technica article reports that OpenAI and Google have agreed to watermark all AI-generated content, such as text, images, audio, and video, to help users distinguish between human and machine creations and to prevent the spread of deepfakes and misinformation.

  • The LinkedIn post by Jonas Schuett, the CEO of a company called Cortical.io that specializes in natural language understanding, expresses skepticism about the voluntary commitments made by the AI companies and argues that they are not enough to ensure ethical and responsible use of AI. He also proposes the idea of a “People’s AI” that would monitor the performance of these companies and intervene when necessary.

Based on these sources, I think there is a case for a so-called “People’s AI” that monitors the performance of these AI companies in the context of the promises made to the Biden-Harris Administration and society as a whole. However, I also think that such a system would face many challenges and limitations.

Some of the possible functions of a People’s AI could be:

  • To audit and evaluate the AI systems developed and deployed by these companies according to the agreed principles and practices, such as transparency, accountability, fairness, safety, security, and human dignity.

  • To detect and report any violations or deviations from these principles and practices, such as bias, discrimination, harm, deception, manipulation, or misuse of AI.

  • To provide feedback and recommendations to these companies on how to improve their AI systems and align them with the public interest and values.

  • To educate and inform the public about the benefits and risks of AI and how to use it responsibly and critically.

  • To facilitate dialogue and collaboration between these companies, the government, civil society, academia, and other stakeholders on how to govern and regulate AI in an ethical and democratic way.

Some of the possible challenges and limitations of a People’s AI could be:

  • To ensure its own independence, credibility, legitimacy, and accountability. Who would design, develop, fund, operate, oversee, and evaluate the People’s AI? How would it avoid conflicts of interest or capture by any of the parties involved? How would it ensure its own adherence to the same principles and practices that it monitors?

  • To access and analyze the data and algorithms used by these companies. How would it overcome the barriers of proprietary information, trade secrets, intellectual property rights, privacy rights, or national security interests? How would it deal with the complexity, diversity, dynamism, opacity, or unpredictability of these systems?

  • To balance between intervention and empowerment. How would it decide when to intervene or steer these companies when needed? What kind of authority or power would it have to do so? How would it respect the autonomy and innovation of these companies while also protecting the rights and interests of the public? How would it avoid becoming a source of censorship or control?

  • To cope with the uncertainty and ambiguity of ethical judgments. How would it define and operationalize concepts such as transparency, accountability, fairness, safety, security, or human dignity? How would it account for different perspectives, values, norms, or cultures among different stakeholders? How would it handle trade-offs, dilemmas, or conflicts that may arise in applying these principles?

These are some of my thoughts on your question. I hope they are helpful. 😊

Claude’s answer: I do not have enough information to make a definitive judgment on the need for or feasibility of a "People's AI" to monitor AI companies' adherence to ethical principles. However, a few thoughts:

  • Independent oversight and accountability mechanisms can play an important role in ensuring powerful technologies are developed and used responsibly. However, the specifics of how such oversight is structured matter greatly.

  • Simply creating another AI system to monitor corporate AI systems raises ethical issues itself - who controls and oversees the "People's AI"? How is it ensured that system also operates ethically?

  • Rather than direct intervention, a "People's AI" focused on analysis, auditing, recommendations, and public transparency could be one possibility. But key questions around governance, control, and oversight would need to be addressed.

  • The companies' commitments are still vague in many areas. Clearer guidelines, expectations, and accountability mechanisms likely need to be fleshed out through an inclusive, multi-stakeholder process. That could help determine what role, if any, an AI oversight system could play.

  • Oversight and accountability are important, but likely work best as part of a broader approach that includes public engagement, democratic governance, corporate responsibility, and ethical AI design. A "People's AI" alone cannot address the full complexity of ensuring AI works for the public good.

In summary, independent auditing and oversight of AI systems could play a constructive role, but many open questions remain around the specifics of how to achieve that in an effective and ethical manner. A simplistic "People's AI" solution likely won't be sufficient without careful, inclusive processes to determine governance, control, and public benefit.

Question 2 of 2 for AI services: can you do some ‘out-of-the-box’ thinking and come up with more original solutions like a ‘People’s AI’ that also allow for monitoring of relevant KPIs and establishing checks and balances relevant to the power and influence of AI service providers and containing any negative side-effects of the deployment of AI technologies?

ChatGPT-4’s answer: Absolutely, let's explore some innovative concepts that could help monitor and regulate the power and influence of AI service providers, while also mitigating negative side-effects of AI technologies:

  1. AI Auditing Platforms: These platforms could be designed to conduct independent audits of AI systems, much like financial audits in the business world. They would assess the safety, fairness, and transparency of AI systems, and their compliance with regulatory standards. These audits could be mandatory and their results made public, providing a clear incentive for AI companies to adhere to best practices.

  2. AI Impact Assessment Tools: Similar to environmental impact assessments, these tools would evaluate the potential societal and ethical impacts of new AI technologies before they are deployed. This could include assessments of potential job displacement, privacy implications, and effects on social inequality.

  3. AI Ethics Certification: An independent body could be established to issue 'AI Ethics Certifications' to companies that meet certain standards of transparency, fairness, and safety in their AI systems. This would provide a clear signal to consumers and regulators about which companies are taking their ethical responsibilities seriously.

  4. Public AI Ombudsman: This role could be established to investigate complaints from the public about AI systems, and to advocate for the rights of individuals affected by these systems. The ombudsman could have the power to impose penalties on companies that violate ethical standards.

  5. AI Transparency Legislation: Governments could enact legislation requiring AI companies to disclose more information about their algorithms, data usage, and decision-making processes. This would help to ensure that AI systems are not 'black boxes' and that their workings can be scrutinized by regulators and the public.

  6. AI User Councils: These councils would be made up of users of AI systems and would have the power to influence the development and deployment of these systems. They could provide a valuable perspective on the real-world impacts of AI technologies, and help to ensure that these technologies are designed with the needs of users in mind.

  7. AI Whistleblower Protections: Strong protections could be put in place for whistleblowers who expose unethical practices in AI companies. This would encourage more people to come forward with information about such practices, helping to keep companies accountable.

  8. Publicly Funded AI Research: More funding could be allocated to public universities and research institutions to conduct research on AI safety, ethics, and regulation. This would help to ensure that the development of AI technologies is guided by the public interest, rather than solely by commercial interests.

These are just a few ideas, and the best approach would likely involve a combination of many different strategies. The key is to ensure that the development and deployment of AI technologies is guided by a strong commitment to transparency, fairness, and the public interest.

Google Bard’s answer: Sure, here are some out-of-the-box solutions that allow for monitoring of relevant KPIs and establishing checks and balances relevant to the power and influence of AI service providers and containing any negative side-effects of the deployment of AI technologies:

  • A global AI ethics council. This council would be made up of experts from different fields, including AI, ethics, law, and policy. The council would be responsible for developing and enforcing global standards for the ethical use of AI.

  • A global AI registry. This registry would track all AI systems that are being developed and used around the world. The registry would include information about the purpose of each AI system, the data that it is trained on, and the potential risks and benefits of the system.

  • A global AI ombudsman. This ombudsman would be an independent body that could investigate complaints about the use of AI. The ombudsman would have the power to recommend changes to the way that AI is used, and to hold AI service providers accountable for their actions.

  • A global AI fund. This fund would be used to support research into the ethical use of AI. The fund would also be used to provide financial assistance to people who are affected by the negative side-effects of AI.

These are just a few ideas for out-of-the-box solutions that could help to ensure that AI is used responsibly and for the benefit of all people. The specific design of these solutions will need to be carefully considered, but these ideas provide a starting point for discussion.
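
To illustrate what Bard’s proposed global AI registry might actually record, here is a minimal, purely hypothetical sketch of a single registry entry. The field names are invented for illustration and are not a proposed standard; they simply mirror the information Bard lists (purpose, training data, risks and benefits).

```python
# Hypothetical sketch of one record in a global AI registry; field names and
# structure are illustrative only, not an existing or proposed schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RegistryEntry:
    system_name: str                      # public name of the AI system
    provider: str                         # organization that develops/operates it
    purpose: str                          # intended use of the system
    training_data_summary: str            # high-level description of training data
    known_risks: List[str] = field(default_factory=list)
    expected_benefits: List[str] = field(default_factory=list)
    last_audit: str = "unaudited"         # could point to an independent audit report

entry = RegistryEntry(
    system_name="ExampleChat",            # hypothetical system and provider
    provider="ExampleCorp",
    purpose="general-purpose conversational assistant",
    training_data_summary="public web text and licensed datasets (self-reported)",
    known_risks=["hallucination", "biased outputs"],
    expected_benefits=["drafting assistance", "information access"],
)
print(entry)
```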

Bing Chat’s answer: Here are some of my ideas:

  • A ‘Council’ that consists of a diverse and representative group of people who are directly or indirectly affected by the AI systems developed and deployed by these companies. The council could have the power and the responsibility to monitor, evaluate, and influence these systems according to the public interest and values. The council could also act as a mediator, a negotiator, or a facilitator between these companies and other stakeholders.

  • A ‘Charter’ that establishes a set of rights and responsibilities for the AI systems developed and deployed by these companies and for the people who use or interact with them. The charter could be based on existing or new legal, ethical, or social frameworks that would define and protect the dignity, autonomy, privacy, security, or welfare of all parties involved. The charter could also be enforced by independent or accountable mechanisms or institutions.

  • A ‘Culture’ that fosters a critical and responsible attitude towards the AI systems developed and deployed by these companies and their impacts on society and the environment. The culture could be shaped by education, media, art, or activism that would raise awareness, inform, inspire, or empower people to use or challenge these systems in a constructive and meaningful way. The culture could also create a sense of belonging, identity, or purpose for people in relation to these systems.

These are some of my ideas. What do you think? 🤔

Claude’s answer: I appreciate you encouraging me to think creatively on this complex issue. There are likely many innovative policy and governance approaches worth exploring further. Here are a few preliminary "out-of-the-box" ideas for accountability and oversight mechanisms, focused on AI service providers:

  • An independent AI auditing organization, akin to accounting/financial auditors, that evaluates AI systems using defined criteria and benchmarks for safety, ethics, transparency, etc. Mandatory audits could provide accountability.

  • A participatory oversight board with diverse public representation that conducts impact assessments and risk-benefit analyses of AI systems. Gives public voice in governance.

  • Use of algorithmic "red teams" to probe and stress-test AI systems to surface harms, biases, and vulnerabilities. Results inform governance.

  • Required ethics checklists and risk mitigation plans for high-impact AI systems. Standardizes accountability processes.

  • Use of software tools and metrics to monitor AI system functionality for signs of technical issues or divergence from intended behaviors. Ongoing tech oversight.

  • Minimum standards encoded into laws and regulations, akin to food/drug safety requirements. Provides legal accountability.

  • Whistleblower protections and incentives for insiders to report unethical AI practices. Promotes accountability from within.

  • Public AI registries and monitoring initiatives that gather data and shed light on industry practices. Empowers public oversight.

There are always pros/cons and details to sort out, but hopefully these spark some ideas on innovative approaches to provide meaningful accountability, oversight and empower the public interest. I'm happy to keep discussing and refining ideas on this important topic. Let me know your thoughts!
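
As a small illustration of the "software tools and metrics" idea in Claude’s list, the toy check below flags when a self-reported KPI drifts beyond an agreed tolerance. The metric, the numbers, and the threshold are all hypothetical; a real oversight tool would need independently audited data and far richer statistics than a single mean comparison.

```python
# Toy divergence check for a self-reported KPI (e.g. an error or fairness
# metric); values and tolerance are hypothetical, for illustration only.
from statistics import mean

def flag_divergence(history, latest, tolerance):
    """Return True if the latest metric value deviates from the
    historical mean by more than the agreed tolerance."""
    baseline = mean(history)
    return abs(latest - baseline) > tolerance

weekly_error_rate = [0.021, 0.019, 0.020, 0.022]  # provider-reported history
print(flag_divergence(weekly_error_rate, 0.022, tolerance=0.002))  # False: within tolerance
print(flag_divergence(weekly_error_rate, 0.035, tolerance=0.002))  # True: escalate for review
```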