Boston Academic Contends Catastrophic AI Anxieties are Overblown and Misdirected

Jul 08, 2023
4 min read
by CryptoPolitan

Academics at UMass Boston’s Applied Ethics Center have been studying how engagement with AI shapes people’s understanding of themselves, and Nir Eisikovits in particular believes the catastrophic anxieties are overblown and misdirected. AI does raise big problems, but they are caused by humans rather than by the technology itself, and it is these that require policymakers’ attention. Such problems have been around for a while, the reasoning goes, and they are hardly cataclysmic.

In recent months, the rise of AI systems such as ChatGPT has sparked widespread anxiety about their potential dangers. Some have even voiced existential fears, likening the risk from AI to that of pandemics and nuclear war. Eisikovits argues instead that the true danger lies in the subtle erosion of essential human qualities, prompting a reevaluation of what it means to be human in the age of AI.

Challenging catastrophic scenarios

While thought experiments like the “paper clip maximizer” have been used to illustrate potential risks, existing AI applications are far from capable of causing large-scale catastrophes. These scenarios, while intriguing, belong more to the realm of science fiction than to imminent threats. Today’s AI systems are built for specific tasks; they lack both the sophisticated judgment and the access to critical infrastructure that such extreme scenarios would require.

Rather than cataclysmic events, the true danger lies in the gradual transformation of human existence through the increasing integration of AI. Existing AI technologies have already demonstrated their potential for harm, such as the creation of convincing deep-fake media and the perpetuation of algorithmic bias in decision-making systems. These issues necessitate attention and regulation from policymakers, but they are not existential threats to humanity.

The diminishing of human qualities

The philosopher argues that the existential danger posed by AI is of a different nature – it is a philosophical risk. AI has the potential to alter how individuals perceive themselves and can gradually erode fundamental human abilities and experiences. One such ability is judgment-making, a trait deeply ingrained in human nature. As more judgments are automated and delegated to algorithms, people may lose the capacity to make these judgments themselves, leading to a decline in their ability to reason and make informed decisions.

Another crucial aspect of human existence that AI impacts is the role of chance and serendipity. Humans value unexpected encounters and the element of surprise in their lives, yet algorithmic recommendation systems aim to minimize such serendipitous experiences by relying on predictability and planning. The gradual replacement of chance with algorithms could rob individuals of meaningful and unexpected discoveries.

Furthermore, the advancement of AI’s writing capabilities raises concerns about the decline of critical thinking skills. If AI technology replaces writing assignments in higher education, educators may lose a valuable tool for teaching students how to think critically and express themselves effectively.

The importance of considered integration

While AI does not pose an imminent catastrophic threat, the uncritical adoption and integration of AI in various domains do carry consequences. The philosopher warns that if these developments continue unchecked, human skills, such as judgment-making, the appreciation of chance encounters, and critical thinking, will be diminished over time. Although the human species will survive these losses, the quality of human existence will be impoverished as a result.

In the midst of rising anxieties surrounding AI, it is crucial to differentiate between exaggerated catastrophic scenarios and the philosophical risks associated with its integration into society. While AI is unlikely to bring about an apocalyptic end, the erosion of essential human qualities is a genuine concern. By recognizing and addressing the subtle costs of AI, policymakers, researchers, and society at large can ensure that AI technologies are thoughtfully integrated to enhance human existence rather than diminish it.

Nir Eisikovits is a professor of philosophy and founding director of the Applied Ethics Center at UMass Boston. Before coming to UMass Boston, he was an associate professor of legal and political philosophy at Suffolk University, where he co-founded and directed the Graduate Program in Ethics and Public Policy. His research focuses on the moral and political dilemmas that arise after war. He is the author of “A Theory of Truces” (Palgrave Macmillan) and “Sympathizing with the Enemy: Reconciliation, Transitional Justice, Negotiation” (Brill), and co-editor of “Theorizing Transitional Justice” (Routledge). He also served as guest editor for a recent issue of Theoria on “The Idea of Peace in the Age of Asymmetrical Warfare.”
