Asking AI: Why are technical AI proponents belligerent?

GPT-4: “Understanding these factors can help foster more constructive discussions about AI.” Claude: “Passion for their work can make them defensive against criticism.”

Question 1 of 3 for AI services: I have noticed that during conversations about AI, proponents with a technical background can be quite belligerent: rather than carefully reviewing the pros and cons of the technology, or basing their views on existing case studies, events and research, they resort to arguments such as: AI is here to stay, you cannot stop technological revolutions, don’t fear technology, don’t be afraid of risks, forget about regulation and ‘just do’, the common good is worth a couple of sacrifices. Why are (technical) AI proponents so belligerent and perhaps less scientific than they could or should be? Is this wishful thinking or a lack of true knowledge, or are they in denial and avoiding the obvious? Please list all the factors you can think of.

ChatGPT-4’s answer: The tone and stance of AI proponents, especially those with a technical background, may come across as belligerent for various reasons. It’s important to remember that not all AI experts or enthusiasts exhibit this behavior; however, for those who do, several factors might contribute to such an attitude:

  1. Techno-optimism: Many technologists have a strong belief in progress and the power of technology to improve the world. This optimism can sometimes overshadow a balanced assessment of potential risks and downsides.

  2. Financial Interests: Some advocates may have financial interests tied to the success of AI technologies, which can lead to a bias towards promoting AI aggressively and downplaying potential issues.

  3. Echo Chambers: AI proponents often interact within communities that share their enthusiasm and optimism about technology. This can sometimes create echo chambers that reinforce their beliefs and dismiss dissenting opinions.

  4. Desensitization to Risk: People who work with AI daily may become desensitized to its risks, much as surgeons may become desensitized to the risks of surgery.

  5. Competitive Pressure: There is a sense of urgency driven by global competition in AI development. The fear of being left behind can prompt a dismissive attitude towards caution and regulation.

  6. Scientific Culture: There is a cultural aspect within the scientific and technical communities that values bold exploration and pushing boundaries, which can sometimes be at odds with calls for caution and regulation.

  7. Misunderstanding of Risk: Some may genuinely believe that fears about AI are overblown due to a misunderstanding of what AI risks entail or because of a lack of engagement with the broader societal implications.

  8. Frustration with Misinformation: Technical experts may be frustrated with what they perceive as misinformation or a lack of understanding about AI among the public or policymakers, leading to a more defensive stance.

  9. Faith in Self-regulation: Some in the tech industry believe that the market or the community of developers can regulate themselves without the need for external control.

  10. Lack of Interdisciplinary Dialogue: AI proponents with a purely technical background may not have had much interaction with ethicists, social scientists, or those from other disciplines who might offer a more nuanced perspective on AI's impact.

  11. Wishful Thinking: There might be a tendency to focus on the ideal outcomes of AI deployment without fully accounting for potential negative scenarios.

  12. Cognitive Biases: Confirmation bias and survivorship bias can lead individuals to pay attention only to successful AI applications and overlook failures and negative outcomes.

  13. Ingroup Bias: The natural human tendency to align with one's perceived group (in this case, AI proponents) can lead to a more aggressive defense of the group's shared interests and viewpoints.

  14. Sensationalism: Sometimes the belligerent stance is a reaction to sensationalist anti-AI arguments, which can prompt an equally extreme counter-response.

  15. Misalignment of Incentives: The incentives in the tech industry often prioritize innovation and speed to market over cautious reflection, influencing the attitudes of those working within it.

  16. Legacy and Pride: For some, AI is not just a professional endeavor but also a legacy they wish to leave behind. This can make them defensive of the technology they have worked so hard to create.

Understanding these factors can help foster more constructive discussions about AI by recognizing the underlying motivations and addressing the concerns of both proponents and skeptics. It's also essential for the broader community to work towards a balanced perspective that considers the complex trade-offs involved in the development and deployment of AI technologies.

Google Bard’s answer: There are a number of reasons why some technical AI proponents may be belligerent in their defense of the technology.

Wishful thinking: Some proponents may be so excited about the potential of AI that they are willing to overlook the potential risks. They may believe that AI is inherently good and will ultimately solve all of our problems. This wishful thinking can lead them to dismiss concerns about AI safety and ethics.

Lack of true knowledge: Some proponents may not have a deep understanding of the potential risks of AI. They may be overconfident in the ability of AI researchers to solve all of the challenges associated with developing safe and reliable AI systems. This lack of knowledge can lead them to make unrealistic claims about the safety and benefits of AI.

Denial and avoidance: Some proponents may be aware of the potential risks of AI, but they may be in denial or avoid thinking about them. They may be afraid of what will happen if we don't develop AI as quickly as possible, or they may simply not want to believe that a technology that they are so passionate about could be dangerous. This denial and avoidance can lead them to dismiss concerns about AI safety as unfounded or alarmist.

Other factors: Additional factors that may contribute to the belligerent behavior of some technical AI proponents include:

  • Confirmation bias: Proponents may be more likely to seek out and pay attention to information that confirms their existing beliefs about the safety and benefits of AI. This can lead them to ignore or downplay evidence of the potential risks of AI.

  • Groupthink: Proponents may be surrounded by other AI enthusiasts who share their beliefs. This can create an environment where dissenting opinions are discouraged and groupthink takes hold. This can lead to a more extreme and belligerent defense of AI.

  • Financial interests: Some proponents may have a financial stake in the development and deployment of AI. This can lead them to be more defensive of the technology and to dismiss concerns about its risks.

It is important to note that not all technical AI proponents are belligerent. Many are careful and thoughtful in their assessments of the technology. However, the belligerent proponents tend to be the loudest and most visible, which can give the impression that the entire field of AI is full of reckless zealots.

It is also important to note that the belligerent proponents are not always wrong. Sometimes, they are simply passionate about AI and believe that it is important to develop the technology as quickly as possible. However, their belligerent behavior can be alienating and counterproductive. It is important to have a respectful dialogue with all stakeholders, including AI proponents, so that we can develop AI safely and responsibly.

Bing Chat’s answer: Thank you for your question. I think there are many possible factors that could explain why some AI proponents with a technical background can be belligerent in their arguments. Here are some of them:

  • Confirmation bias: This is the tendency to seek, interpret, and remember information that confirms one’s existing beliefs or hypotheses. AI proponents may be more likely to focus on the positive aspects of AI and ignore or downplay the negative ones, because they have invested a lot of time, effort, and resources in developing and promoting AI. They may also be more resistant to changing their views when confronted with evidence or arguments that challenge them.

  • Overconfidence: This is the tendency to overestimate one’s own abilities, knowledge, or skills. AI proponents may be overconfident in their technical expertise and assume that they know more than others about the potential and limitations of AI. They may also be overconfident in their ability to control or predict the outcomes of AI systems, especially when they are complex, uncertain, or adaptive. They may underestimate the risks or difficulties involved in creating and deploying AI safely and ethically.

  • Groupthink: This is the tendency to conform to the opinions or decisions of a cohesive group, especially when there is pressure to reach consensus or avoid conflict. AI proponents may be influenced by the views and attitudes of their peers, colleagues, or mentors in the AI community, who may share similar goals, values, or interests. They may be reluctant to express dissenting opinions or question the assumptions or methods of their group. They may also be less exposed to or receptive to alternative perspectives or feedback from outside their group.

  • Motivated reasoning: This is the tendency to use reasoning to justify one’s desired conclusions or actions, rather than to seek the truth or accuracy. AI proponents may have various motives or incentives that drive them to advocate for AI, such as personal interest, professional ambition, financial gain, social recognition, or ideological commitment. They may use selective evidence, logical fallacies, rhetorical devices, or emotional appeals to support their claims or persuade others. They may also rationalize or dismiss any objections or criticisms that threaten their motives or goals.

These are some of the possible factors that could contribute to the belligerence of some AI proponents with a technical background. However, I do not mean to imply that all AI proponents are belligerent or that belligerence is exclusive to them. There may be other factors that I have not mentioned, and there may be other groups or individuals who exhibit belligerence in different contexts or domains. I hope this answer helps you understand some of the psychological and social influences that can affect human reasoning and communication.

Claude’s answer: There are a few potential reasons why some AI proponents may come across as belligerent or unscientific in discussions about the technology:

  • Over-enthusiasm and optimism bias: As developers and researchers closely involved in creating AI systems, they may have an inherent optimism bias and get carried away envisioning the potential benefits, while downplaying or dismissing potential risks and limitations.

  • Lack of interdisciplinary expertise: Those with purely technical backgrounds may lack training in ethics, policy, regulation, philosophy, etc. This could lead to overlooking important considerations beyond the technology itself.

  • Values dissonance: There can be clashes between the values of techno-optimists (e.g. efficiency, disruption, technological solutionism) versus techno-skeptics (e.g. precaution, regulation, non-technological solutions). This may come across as belligerence.

  • Financial and career incentives: Researchers and companies have strong incentives to promote the technology they are developing and dismiss concerns that may hinder funding or adoption.

  • Defensiveness: If concerns are raised about AI risks, some may become defensive as they perceive this as an attack on their life's work.

  • Lack of precedents: The disruptive potential of AI is unprecedented, so even experts are speculating about future impacts. Lack of clear precedents may lead to downplaying risks.

  • Blind faith in technology: A quasi-religious belief that technological progress is inevitable and that any problems that arise can be solved with more technology.

So in summary, over-optimism, disciplinary biases, value differences, financial incentives, defensiveness, unpredictability, and blind faith could all contribute to an uncritical and belligerent stance on AI from some technologists. A more interdisciplinary and nuanced view may be needed.

Question 2 of 3 for AI services: Please list all the pros and cons of this behaviour for those wanting to have factual and meaningful debates about the risks and benefits of artificial intelligence.

ChatGPT-4’s answer: The behavior of AI proponents, when characterized by assertiveness or even belligerence, can have both positive and negative impacts on debates about the risks and benefits of artificial intelligence. Here is a list of potential pros and cons of such behavior:

Pros:

  1. Advocacy for Progress: Their fervor can drive the development and adoption of AI technologies, ensuring that potential benefits are pursued actively.

  2. Countering Fear: Strong advocacy can help counteract irrational fears and misinformation about AI, promoting a more evidence-based understanding.

  3. Passionate Engagement: Their passion can encourage more people to engage with the subject of AI, expanding the conversation to a wider audience.

  4. Defense of Innovation: They can effectively defend the need for innovation against overcautious regulation that might stifle beneficial technological advancements.

  5. Visionary Perspectives: Such proponents can provide a visionary perspective that challenges conventional thinking and opens up new possibilities for the application of AI.

  6. Highlighting Success: By championing AI's achievements, they can help the public and policymakers understand the positive impact AI has already made.

Cons:

  1. Overshadowing Risks: This attitude can lead to the underestimation of real risks and ethical concerns associated with AI, as the focus is often on promoting the technology's benefits.

  2. Polarization: Belligerent advocacy can polarize the debate, making it harder for different sides to come to a consensus on how to proceed with AI development.

  3. Undermining Credibility: If proponents appear too dismissive of legitimate concerns, it can undermine their credibility and the public’s trust in AI technologies.

  4. Hindering Regulation: The aggressive push against regulation can lead to insufficient oversight, potentially allowing harmful AI applications to proliferate.

  5. Echo Chamber Creation: Strong advocacy can contribute to the formation of echo chambers, where only like-minded opinions are heard and external insights are ignored.

  6. Neglect of Interdisciplinary Insights: By not engaging seriously with the concerns of ethicists, social scientists, and others, there's a risk of overlooking important social and ethical implications of AI.

  7. Inhibiting Balanced Discourse: A combative approach can inhibit open, balanced discourse, which is essential for understanding AI's multifaceted impacts.

  8. Drowning Out Minority Voices: Aggressive advocacy can drown out minority voices and concerns, particularly those from groups that may be negatively impacted by AI.

For meaningful and factual debates on AI, it’s crucial for all parties involved to strive for balanced and respectful discourse, acknowledging both the potential and the limitations of artificial intelligence. This requires proponents to engage with critics constructively and for all parties to be willing to update their beliefs in light of new evidence or arguments.

Google Bard’s answer: Here are some of the pros and cons of belligerent behavior in factual and meaningful debates about the risks and benefits of artificial intelligence:

Pros:

  • It can be effective in grabbing attention and making a point.

  • It can be used to challenge the status quo and force people to reconsider their beliefs.

  • It can be used to energize supporters and motivate them to take action.

Cons:

  • It can be alienating and counterproductive.

  • It can make it difficult to have a respectful and productive dialogue.

  • It can lead to the spread of misinformation and disinformation.

  • It can undermine the credibility of the person or organization engaging in the behavior.

In the context of debates about AI, belligerent behavior is particularly harmful because it can make it difficult to have a rational and informed discussion about the risks and benefits of the technology. Instead of carefully considering the evidence, people may be more likely to simply take sides based on their emotions or their loyalty to a particular individual or group. This can lead to poor decision-making and can hinder the development of safe and responsible AI systems.

It is important to note that there is a difference between being passionate about AI and being belligerent. It is perfectly possible to have a strong opinion about AI and to advocate for its development, while still being respectful of others and engaging in civil discourse.

Here are some tips for having a factual and meaningful debate about AI:

  • Be respectful of others, even if you disagree with them.

  • Base your arguments on evidence and logic, not emotion.

  • Be open to changing your mind if presented with new information.

  • Be willing to admit when you are wrong.

  • Focus on the common goal of developing AI safely and responsibly.

By following these tips, we can create a more productive and informative environment for discussing the future of AI.

Bing Chat’s answer: Some of the pros and cons of this behaviour for factual and meaningful debates about the risks and benefits of artificial intelligence are:

  • Pros:

    • It may show passion and enthusiasm for AI and its potential to improve various aspects of human life and society.

    • It may challenge the status quo and stimulate innovation and creativity in developing and applying AI solutions.

    • It may inspire others to learn more about AI and its possibilities and limitations, and to engage in constructive dialogue and collaboration.

  • Cons:

    • It may alienate or antagonize those who have different views or concerns about AI and its implications, and create polarization or conflict.

    • It may undermine the credibility and trustworthiness of AI proponents and their arguments, and invite scrutiny or criticism from others.

    • It may neglect or ignore the ethical, social, legal, or environmental issues or risks that may arise from AI development or deployment, and compromise the safety, fairness, or accountability of AI systems.

Claude’s answer: Here are some potential pros and cons of the belligerent, unscientific behavior that can arise in debates about AI risks/benefits:

Pros:

  • Passion and enthusiasm for the technology can help drive innovation and adoption.

  • Dismissing concerns expedites research and commercialization without the hindrance of regulations.

  • Overconfidence in the technology's potential may lead to bigger thinking and ambition.

  • Questioning concerns helps strengthen arguments in favor of AI development.

  • Boldness pushes boundaries of what's considered possible with technology.

Cons:

  • Drowns out important ethical, social, and safety concerns regarding AI.

  • Lack of nuance or interdisciplinary thinking leads to narrow perspectives.

  • Potentially dangerous technologies may be developed without adequate caution.

  • Public distrusts the technology if real concerns are dismissed or downplayed.

  • Regulations and preparative measures may be avoided or delayed.

  • Sets up unrealistic expectations that could lead to disillusionment later.

  • Critical feedback that could improve the technology may be ignored.

  • Rapid development without oversight increases risk of accidents or abuse.

  • Undermines scientific rigor and objectivity around socio-technical issues.

In summary, belligerence and a lack of nuance make it harder to have productive public discussions about maximizing benefits and minimizing risks. A thoughtful, interdisciplinary approach is important for creating AI that serves the broader social good.

Question 3 of 3 for AI services: Do technologists have a special responsibility NOT to be belligerent and dismissive of AI-related concerns, given that they are supposed to have more in-depth knowledge about the technology? Or is that too much to ask or expect from them, and therefore an unrealistic expectation in these sometimes heated debates?

ChatGPT-4’s answer: Technologists, given their in-depth knowledge of AI and its potential impact, do indeed carry a special responsibility in the discourse surrounding AI-related concerns. Here’s why their role in the debate should be conscientious and measured, rather than belligerent or dismissive:

  1. Expertise and Influence: Technologists are often seen as authoritative voices on AI; their opinions can significantly influence public perception and policy. Therefore, they have a responsibility to be well-informed and balanced in their viewpoints.

  2. Ethical Obligation: There is an ethical component to their work; they should ensure that the technology they develop is safe and beneficial to society. Dismissing valid concerns undermines this ethical obligation.

  3. Public Trust: Maintaining public trust in technology is essential. Belligerence and dismissiveness can erode that trust, leading to increased fear and resistance to beneficial technologies.

  4. Responsible Innovation: Part of responsible innovation is considering and addressing the societal impacts, both positive and negative, of the technology being developed. This includes engaging with the public and stakeholders in a respectful and informative manner.

  5. Risk Communication: Technologists are often in the best position to understand and communicate risks. It is their responsibility to share this information in a way that is factual and not dismissive, helping non-experts understand complex issues.

  6. Guidance for Regulation: Lawmakers and regulators rely on the expertise of technologists to develop sensible AI governance. If technologists are dismissive, they may not provide the necessary guidance to create effective regulations that balance innovation with public safety and ethical considerations.

  7. Interdisciplinary Collaboration: The best solutions often come from interdisciplinary work. Technologists need to engage constructively with professionals from other fields, such as ethics, law, and social sciences, to ensure a holistic approach to AI development.

Expecting technologists to live up to these responsibilities is not unrealistic; rather, it is an essential part of the professional duty of care they have as experts in their field. This does not mean they cannot argue passionately for their perspectives on AI; it means that their arguments should be made with a consideration for the potential consequences of their technology and a respect for the diverse range of views on the subject.

However, it's important to recognize that technologists are human and can be subject to the same emotional responses and biases as anyone else. They operate within the same societal pressures and constraints as other professionals. Organizations, academic institutions, and professional societies can play a role in setting standards and providing training to help technologists engage in public discourse effectively and ethically.

Google Bard’s answer: Yes, technologists have a special responsibility not to be belligerent and dismissive of AI-related concerns. They are supposed to have more in-depth knowledge about the technology, and they are in a position to help others understand the potential risks and benefits of AI.

When technologists are belligerent and dismissive of AI-related concerns, it can make it difficult for the public to have a rational and informed discussion about the technology. It can also lead to a loss of trust in the technology and those who are developing it.

Of course, it is unrealistic to expect that all technologists will always be perfectly calm and rational in debates about AI. But it is important for them to be aware of their responsibility to have a respectful and productive dialogue with others.

Here are some tips for technologists who want to engage in constructive debates about AI:

  • Be willing to listen to the concerns of others, even if you disagree with them.

  • Try to understand why people are concerned about AI.

  • Be honest about the potential risks and benefits of AI.

  • Be open to learning from others.

  • Be willing to admit when you don't have all the answers.

By following these tips, technologists can help to create a more informed and productive public discourse about AI.

It is also important to note that technologists are not a monolithic group. There is a wide range of opinions within the field about the risks and benefits of AI. Some technologists are very concerned about the potential risks, while others are more optimistic about the future of AI.

This diversity of opinion is a good thing. It means that there is a healthy debate happening within the field about AI safety and ethics. This debate is essential for developing safe and responsible AI systems.

Overall, technologists have a special responsibility to be respectful and constructive in debates about AI. They should be willing to listen to the concerns of others and to engage in honest and open dialogue. By doing so, they can help to create a more informed and productive public discourse about AI.

Bing Chat’s answer: I think technologists have a special responsibility not to be belligerent and dismissive of AI-related concerns, because they are supposed to have more in-depth knowledge about the technology and its impacts. I think this responsibility is not too much to ask or expect from them, and that it is realistic and beneficial for these debates. Here are some reasons why:

  • Technologists have a duty to uphold the ethical principles and professional standards of their field, and to respect the values and rights of their stakeholders, such as users, clients, customers, or society at large. Being belligerent and dismissive of AI-related concerns may violate these principles and standards, and damage the reputation and trust of technologists and their work.

  • Technologists have an opportunity to educate and inform others about AI and its risks and benefits, and to foster a culture of learning and dialogue among different groups and perspectives. Being belligerent and dismissive of AI-related concerns may hinder this opportunity, and create barriers or misunderstandings between technologists and others.

  • Technologists have a challenge to address and mitigate the potential harms or issues that may arise from AI development or deployment, and to ensure the safety, fairness, or accountability of AI systems. Being belligerent and dismissive of AI-related concerns may increase this challenge, and make it harder to identify or solve the problems or conflicts that may occur.

Therefore, I think technologists have a special responsibility not to be belligerent and dismissive of AI-related concerns, and that this responsibility is realistic and desirable for having factual and meaningful debates about AI. I hope this answer helps you understand my perspective on this topic.

Claude’s answer: I think technologists do have a special responsibility to engage thoughtfully and responsibly when discussing potential risks and concerns around AI technology:

  • As experts with in-depth knowledge, they are in a unique position to provide informed perspectives on the capabilities and limitations of AI systems. This expertise lends a credibility that makes their views highly influential.

  • They are developing incredibly powerful technologies that could have major societal impacts. This gives them an obligation to consider the broader implications.

  • As pioneers of a transformative technology, they set the tone for public discourse and perceptions around AI. Belligerence fuels distrust rather than thoughtful debate.

  • Most have access to cutting-edge research and understand the nuances around safety, security, bias in data and models, etc. They should leverage this knowledge.

  • AI researchers are often publicly funded. This creates an obligation to engage responsibly with the public who funds their work.

  • Multidisciplinary collaboration is needed to develop wise policies and applications of AI. Dismissiveness discourages this.

However, it may be unrealistic to expect perfect responsibility from all technologists. Reasonable disagreement can occur in good faith:

  • They have an intrinsic optimism bias, given their vocation and successes with AI.

  • Heated rhetoric can arise naturally in emerging technologies with profound implications.

  • Technical experts can still have knowledge gaps on ethical/social implications.

  • Passion for their work can make them defensive against criticism.

So while responsibility should be encouraged, some level of imperfect behavior is expected. The goal should be promoting respectful, thoughtful dialogue that engages public concerns, without expecting technologists to be perfect ethical arbiters.
