Asking AI Services: Please calculate the chance of AI ‘not being doable’.
Claude: 20% - ChatGPT-3.5: 20% - ChatGPT-4: 15% - MS Copilot: 15% - Google Gemini: < 5%
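Taken together, and reading Gemini’s ‘< 5%’ as 5% purely for the sake of the arithmetic, a simple unweighted average of the five estimates comes out at roughly 15%: (20 + 20 + 15 + 15 + 5) / 5 = 15.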
Question for AI services: Please calculate the chance of AI ‘not being doable’. Give me a percentage before explaining your reasons. Background: some people argue that AI in general, and Large Language Models specifically, will hit a ceiling due to a lack of compute and funding, the environmental impact, and technological challenges that will prevent ‘consciousness’ and ‘understanding’ from being realised, which in turn would make the technology less suitable for decision-making and accuracy in critical sectors such as Finance, Legal, Healthcare, the Military Industry and Scientific Research.
Claude’s answer: Based on the current state of AI technology, particularly in the field of large language models and deep learning, I would estimate the chance of AI "not being doable" in the sense of achieving human-level general intelligence or consciousness to be around 20%.
Here's my reasoning:
Rapid progress: The field of AI, and specifically deep learning and large language models, has seen remarkable progress in recent years. Models like GPT-3, PaLM, and Claude have demonstrated impressive capabilities in natural language processing, reasoning, and problem-solving. While we may not fully understand the underlying mechanisms, these models are exhibiting behaviors that were previously thought to be challenging for machines.
Continuous advancements: The pace of research and development in AI is accelerating, driven by increasing computational power, larger datasets, and algorithmic innovations. As we continue to make breakthroughs in areas like unsupervised learning, reinforcement learning, and multi-modal integration, the capabilities of AI systems are likely to expand further.
Theoretical support: There is no known fundamental barrier or law of physics that would prevent the creation of artificial general intelligence (AGI) or conscious machines. While the challenges are significant, many researchers believe that with sufficient computational resources, data, and algorithmic advancements, AGI or consciousness-like systems could be achievable.
Uncertainty and challenges: However, there are significant uncertainties and challenges that need to be addressed. These include the scalability of current techniques, the potential for diminishing returns, the need for new paradigms beyond deep learning, and the complexity of replicating human-level cognition and consciousness. Additionally, societal and ethical concerns, as well as potential limitations in compute, funding, and environmental impact, could hinder progress.
While the challenges are formidable, the rapid progress in AI and the absence of known fundamental barriers suggest that the chances of AI "not being doable" in the long run are relatively low, perhaps around 20%. However, the timeline for achieving AGI or consciousness-like systems remains highly uncertain and could take decades or longer.
It's important to note that my assessment is based on the current state of knowledge and could change as new developments or insights emerge in the field of AI. Additionally, the ethical and societal implications of advanced AI systems, particularly in critical sectors, need to be carefully considered and addressed.
ChatGPT-3.5’s answer: Assigning an exact percentage is inherently speculative and subject to significant uncertainty. However, if pressed to provide a specific figure, I would cautiously suggest a 20% chance that AI, particularly in the form of advanced consciousness and understanding akin to human cognition, might not be achievable in the foreseeable future. This percentage reflects acknowledgment of the substantial challenges and uncertainties surrounding AI development while also leaving room for the possibility of breakthroughs and advancements that could lead to its realization.
Assigning a specific percentage to the likelihood of AI not being achievable is challenging due to the complexity and uncertainty surrounding the development of artificial intelligence. However, I can offer some insights into the factors that may influence the potential limitations or challenges faced by AI technology.
Compute Power: One major argument against the feasibility of advanced AI is the limitation of available computational resources. Developing AI models that can truly mimic human-like understanding and consciousness may require computing power beyond what is currently feasible or sustainable.
Funding and Resources: Adequate funding and resources are essential for AI research and development. If funding for AI research stagnates or declines, progress in the field could slow down, potentially hindering the realization of advanced AI systems.
Technological Challenges: AI development faces numerous technological hurdles, including but not limited to issues like explainability, robustness, and safety. Overcoming these challenges is crucial for AI systems to be reliable and trustworthy in critical sectors.
Environmental Impact: The environmental impact of AI, particularly large-scale training of models like Large Language Models, is a concern. The energy consumption associated with training and running AI models at scale raises questions about sustainability and environmental responsibility.
Ethical and Societal Concerns: There are ethical and societal implications associated with the development and deployment of AI technologies. Concerns about bias, fairness, privacy, and job displacement may impact the acceptance and adoption of AI systems in critical sectors.
Considering these factors, while it's difficult to assign a precise percentage, it's clear that there are significant challenges and uncertainties that could potentially limit the realization of advanced AI systems. However, it's also worth noting that ongoing research, technological advancements, and societal developments may mitigate some of these challenges over time.
ChatGPT-4’s answer: Calculating the exact probability of AI "not being doable" in the sense described is highly speculative and depends on a range of factors including technological, environmental, economic, and social elements. However, I can offer a rough estimation, based on the current trajectory and challenges facing AI development:
Estimated chance of AI "not being doable" as described: 15%
This estimation reflects significant but manageable challenges. Here are the reasons for this assessment:
Computational Limits: There are indeed physical and economic limits to computing power, which is a fundamental resource for training more complex AI models. However, advancements in hardware, such as quantum computing and specialized AI processors, could potentially overcome current limitations.
Funding and Economic Incentives: AI technology continues to attract significant investment due to its potential to transform industries and create economic value. This suggests that funding might persist as long as AI continues to demonstrate utility and potential profitability.
Environmental Impact: The environmental cost of training large AI models is a growing concern. Efforts to make AI more energy-efficient and the shift towards using renewable energy sources for data centers are steps towards mitigating these impacts.
Technological Challenges: While true "consciousness" and "understanding" are speculative attributes that may never be fully realized in AI, the technology does not necessarily need to achieve these milestones to be highly effective and transformative in various sectors. Current AIs, including large language models, already perform remarkably well in specific tasks despite lacking true understanding or consciousness.
Regulation and Ethical Concerns: The adoption of AI in sensitive fields like healthcare, finance, and military applications is subject to stringent regulations aimed at ensuring safety and fairness. These regulatory frameworks can guide the development of AI technologies in a way that maximizes their benefits while minimizing risks.
Public Perception and Social Acceptance: While there are concerns about AI, its increasing integration into everyday applications (like smartphones, cars, and home assistants) is gradually normalizing its presence in society, potentially easing widespread adoption in more critical sectors over time.
Each of these factors introduces a degree of uncertainty and potential limitation to the full realization and application of AI technologies. However, ongoing research, investment, and societal integration suggest that while the road may be bumpy and the ultimate capabilities of AI may fall short of some expectations, the technology is likely to continue evolving and finding new applications.
MS Copilot’s answer: Predicting the exact percentage likelihood of AI ‘not being doable’ is inherently challenging due to the many variables involved. However, I can provide an informed estimate based on the factors you mentioned:
Compute and Funding: The trend has been towards increased computational power and investment in AI. While there may be limitations, ongoing research and technological advancements are likely to continue improving AI capabilities. Likelihood: Low
Environmental Impact: Concerns about energy consumption and environmental impact are valid. However, efforts to develop energy-efficient AI models and use renewable energy sources can mitigate these issues. Likelihood: Moderate
Technological Challenges: Addressing challenges related to context understanding, causal reasoning, and achieving ‘consciousness’ is an ongoing process. While progress is being made, complete solutions are not guaranteed. Likelihood: Medium
Application in Critical Sectors: AI’s suitability for decision-making in critical sectors depends on transparency, reliability, and human oversight. Striking the right balance will be crucial. Likelihood: High
Considering these factors, I estimate that the chance of AI ‘not being doable’ is approximately 15%. Keep in mind that this is a rough estimate, and the actual outcome may differ based on future developments and societal choices.
Google Gemini’s answer: It's difficult to assign a concrete percentage chance of AI not being doable. Here's why:
Defining "Doable" AI: The definition of "doable" AI is subjective. Does it mean replicating human consciousness (highly unlikely), achieving human-level intelligence (uncertain timeline), or simply performing specific tasks effectively (already happening)?
Progress is Gradual: AI advancements are incremental. We might not achieve a single breakthrough moment, but rather a series of improvements pushing the boundaries.
However, the concerns you mentioned are valid. Here's a breakdown:
Compute and Funding: These are hurdles, but technology like cloud computing and increasing investment are mitigating factors.
Environmental Impact: This is a serious concern, but research into energy-efficient AI is ongoing.
Technological Challenges: Challenges like true "understanding" and general intelligence are significant, but progress is being made in areas like machine learning and natural language processing.
So, while there will likely be limitations, some form of advanced AI seems achievable.
In conclusion, the chance of AI being entirely "not doable" seems very low.
Assigning a specific percentage is very difficult for the reasons mentioned previously. However, to give you a very rough estimate:
Chance of AI never being achievable in any form: Less than 5%
This is because even without achieving human-level intelligence or sentience, there's a very high probability that AI will continue to advance in some capacity, even if limited.
It is important to remember:
The definition of "doable" AI can vary greatly.
Progress is likely incremental, not a single breakthrough.
The true answer lies somewhere between "not doable at all" and achieving human-like consciousness.