I find AI slop on the internet a lot and in some submitted papers. This year I am seeing this slop for the first time at the lab. It is mainly from lab managers. It is beyond obvious they are using it. Suddenly you have managers who could barely write putting out these long-winded posts with fancy words that sound nothing like how the manager speaks or writes. You also see lots of weird compliments that seem out of place in the writing, the topic is off, or the comments on the subject matter use weird terms. If you email some of them you get back a response within minutes that is rather long but just repeats what you said and adds nothing. Also, if you talk to them in person about something said in the email they often do not recall it, or seem confused, indicating they are not even reading the AI slop that was generated for them.
On the technical side I have noticed: (1) People who I would say are rather poor scientists are now saying AI can write a paper for them. I find that a bit odd. (2) I have noticed more papers that are clearly written by AI. They are grammatically correct and the data is correct, but the topic is just bizarre, in the sense that no real human would want to ask or study this problem. It always reads like bad high school science but with high-quality writing. In the past I would reject papers that were not at the level of the journal, were interesting but had data that was not good enough, had a clear error, or were badly organized. For the first time I am now rejecting papers simply because they add nothing even remotely interesting or of value to the field. In some lower-level journals the work only needs to be correct and readable to be accepted, and these AI papers meet that bar, so I guess some of it gets published.
It seems like AI is lowering the quality of a lot of things rather than improving them.
I suspect that what you are going to find is that places like Harvard or Caltech are going to become even more elite simply by using less AI. I have to ask what AI is going to do to the quality of the work done at the NNSA labs and to the staff. I think a real danger is that it will just end up lowering standards and quality. I have already seen this with the NNSA manager emails, where the managers seem to not even know what the email said.
Comments
“PhD engineers are expected to engage in advanced research, develop new technologies, and contribute original knowledge to their field, which can lead to specialized roles in academia or high-tech industries. In contrast, non-PhD engineers typically focus more on practical applications and implementation of existing technologies rather than conducting original research.”
How many lab PhD Engineers were doing PhD level work before AI assistance?
"How many lab PhD Engineers were doing PhD level work before AI assistance?"
Good question.
I think the first thing AI will replace is scientists, followed by engineers, then doctors, lawyers, and after that truck drivers and plumbers. I think it hits a wall after that and it will not be able to replace managers or artists, so there is still a future for some humans.
Too much of what we call work is low-value, no-thought work. AI is an assistant and productivity enhancer, not a replacement for people. We should be using it to find the work we should eliminate. Eliminate BS work, not people, and get the Labs back to thinking and doing first-class scientific work.
The problem is that the lab's mission, especially at Los Alamos, is to be a "Force for Good," which in practice means hire lots and lots of people. You have a direct contradiction between what AI can offer in terms of efficiency and one of the core missions of the lab. At LANL they keep talking about being more "efficient," but you cannot have it both ways: being a force for good that hires tons of people just so they can be paid, versus asking whether we really need jobs that add nothing or that AI could rapidly replace. The other odd thing is the number of managers who talk about replacing scientists and engineers with AI but never say anything about admin, paper pushers or ...managers. It is clear what they want out of AI, and it is not increased technical efficiency.
The opposite will happen. One of LANL's core missions, "Force for Good," is about hiring lots of people who collect a paycheck. That is what it is, everyone knows this, so we need to just hire and hire. Heck, some people do not even show up to work, or show up once or twice a week. They are getting paid to do exactly what they are paid to do, which is to make money for the local economy.
I have been at the labs for more than 30 years and we have reached a level of peak inefficiency. Managers are saying AI will just mean we can hire even more people. As far as I can tell things have gotten worse at LANL with AI: our managers can send out longer, more pointless emails that they do not even read. The HR staff use AI to throw together paperwork even faster, but it never works and they have no idea what is in it.
The issue is the culture. LANL simply does not have a culture of getting things done, so you have this weird small subgroup of scientists and engineers who want to get stuff done, and then everyone else. There is no reward for getting stuff done, there is only punishment for getting stuff wrong, so it is best to do as little as possible. There is no risk, no work, only compliance and force for good. AI is only going to make it worse: more people will be hired and less will be done.
Of course it is not just LANL but all sorts of businesses.