Debunking the Negative Perception of AI in Education

May 12, 2024
3 min read
by CryptoPolitan

Despite the benefits that artificial intelligence offers, fear of the technology is so overwhelming that its positives hardly ever come to light. The negativity is fueled by a steady stream of reports about AI replacing workers, a prospect that appeals to industrialists but alarms everyone else. Problems such as deepfakes and propaganda tools are also treated as part and parcel of AI. All of this blurs the picture so much that spotting the technology's upsides takes more than a magnifying glass.

Perception of AI in education

The perception of AI in education follows the same pattern, for obvious reasons. Educators fear eventually being replaced by AI systems, while students are accused of cheating and violating academic integrity.

The apprehensions about the first scenario are premature, but the second is the subject of widespread debate on university campuses and in schools.

The technology is still immature and developing, so it is difficult to predict what it will eventually accomplish. Even so, students are already using tools like ChatGPT and Gemini to complete assignments and write basic essays.

Educators, for their part, are using AI-powered systems to detect AI-generated content in assessments, but the obvious concern remains the rising incidence of cheating.

The goal here is not to stop students from using artificial intelligence tools, but to make them understand that these tools should be used to deepen their learning and build a clear understanding of the subject at hand.

Even before AI, education faced challenges such as plagiarism, which were sorted out over time. It is reasonable to expect a similar game of hide-and-seek between tool developers and detection developers until the AI-detection problem is solved as well.

For now, students might submit work they do not even understand, but a sudden jump from school-level to doctorate-level writing on a topic is easy for educators to spot, and anyone attempting it will be in for a surprise when grades are announced.

The unknown side

There is also a positive side to using these AI tools. Because students fear that AI might introduce nonsense or material outside the scope of their topic, they do their due diligence, editing and checking the content for such mishaps before submitting it. In a way, students end up doing some of the work teachers are expected to do.

Last year, a research paper published in Humanities and Social Sciences Communications noted that AI is making students lazy by automating their work, which affects their cognitive decision-making ability and exposes them to increased privacy risks.

The reality, however, is that the rate of cheating in academia is quite low, around five percent. This is not widely known, since mainstream media rarely discusses it. Another reason is that students know cheaters do not stand a chance, as they cannot withstand even a little critical questioning.

Back in January, OpenAI announced a partnership with the non-profit Common Sense Media on an initiative to develop AI ratings that help students, parents, and educators better understand the risks and benefits the technology offers.

Jim Steyer, CEO of Common Sense Media, said at the time:

“[Materials] will be designed to educate families and educators about the safe, responsible use of ChatGPT so that we can collectively avoid any unintended consequences of this emerging technology.”

Source: Common Sense Media.

Initiatives like this are steps in the right direction, and they were rarely seen when previous technologies emerged. The positive change is that, this time, the industry is responding constructively.

The negative perception surrounding AI has to change. This is a technology that is reshaping every corner of the world we live in and the way it operates day to day. Change is rarely accepted easily, but everyone has to adapt.

Read the article at CryptoPolitan
