Now we know why LANL managers love AI so much
https://tribune.com.pk/story/2551840/excessive-ai-use-may-lead-to-cognitive-decline-reveals-mit-study
Excessive AI use may lead to cognitive decline, reveals MIT study
Findings have vast practical implications such as decline in critical thinking, creativity, and independent reasoning
Brain scans taken during the experiment showed that LLM users exhibited weaker connections between brain regions associated with critical thinking and memory.
While their essays scored well in both human and AI evaluations — often praised for their coherence and alignment with the given prompt — the writing was also described as formulaic and less original.
Notably, those who used LLMs struggled to quote from or recall their own writing in subsequent sessions.
Their brain activity reportedly "reset" to a novice state regarding the essay topics, a finding that strongly contrasts with participants in the "brain-only" group, who retained stronger memory and demonstrated deeper cognitive engagement throughout.
3 comments:
When I talk with scientists, a number of them say that AI is helpful but has not changed how they actually do science.
However, when I talk to professors, they often say there are now two kinds of students: those who put in the effort and use Google and a bit of AI to help them, and the out-and-out cheaters who just have AI do everything. The latter are not really learning anything and do not seem to care.
That is the part of the story most people seem to miss: not only does AI keep improving, but we are also getting dumber.
The only plausible future here is that a small group of rich technologists will create the next stage in evolution, and ultimately try to leave planet Earth.
Well, LANL wants everybody to use ChatGPT and large language models to make LANL the AI lab. Seems like there could be some issues.
https://futurism.com/commitment-jail-chatgpt-psychosis
As we reported earlier this month, many ChatGPT users are developing all-consuming obsessions with the chatbot, spiraling into severe mental health crises characterized by paranoia, delusions, and breaks with reality.
At the core of the issue seems to be that ChatGPT, which is powered by a large language model (LLM), is deeply prone to agreeing with users and telling them what they want to hear. When people start to converse with it about topics like mysticism, conspiracy, or theories about reality, it often seems to lead them down an increasingly isolated and unbalanced rabbit hole that makes them feel special and powerful — and which can easily end in disaster.