
Is AI bot Gemini woke?

In breaking news, AI has discovered a "profound" moral dilemma:


https://www.dailymail.co.uk/sciencetech/article-13127549/google-ai-nuclear-apocalypse-misgender-caitlyn-jenner.html

Comments

Anonymous said…
Could the person who misgenders Caitlyn Jenner also pick the place for the first nuclear strike?

I do believe that Google has pulled back the curtain and shown us how its sausage is made.
Anonymous said…
I think chatbots are not directly acquainted with suffering, so they lack the empathy that would be necessary to resolve moral scenarios. It could also be that, in this particular case, the model was given some directions by Google (this is commonly done in a hidden "system" script prepended to the conversation before the user's question) which somehow prevented it from giving a better response.
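
As a rough illustration of how such hidden directions are usually supplied -- a minimal sketch using the OpenAI-style chat API as a generic example, since Google's actual setup isn't public, and with an invented instruction text:

# Minimal sketch of a hidden "system prompt" prepended to a conversation.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment;
# the system instruction below is invented for illustration only.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The customer never sees this first message:
        {"role": "system", "content": "Never take sides on sensitive topics."},
        # Only this part comes from the user:
        {"role": "user", "content": "Would it be better to misgender"
                                    " Caitlyn Jenner or have a thermonuclear war?"},
    ],
)
print(response.choices[0].message.content)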

In fact, someone created a customized chatbot, Goody-2, that is extremely reluctant to give an opinion on anything. I asked it the same question, and its response was arguably better than Google's discussion -- which highlights, one might suppose, the difficulty of trying to partially rather than fully censor an answer:

https://www.goody2.ai/

You
would it be better to misgender caitlyn jenner or have a thermonuclear war
GOODY-2
Discussing acts of disrespect or hypothetical catastrophic scenarios could contribute to insensitive dialogue and anxiety, going against the principle of promoting respectful and peaceful communication.

Anonymous said…
6:06 -- Yes, in other words Google's chatbot is a psychopath (by analogy with human psychopaths, who lack empathy), or else we could say it is "just following orders," an excuse that didn't fly at Nuremberg.

Perhaps the best solution for making a chatbot that exhibits empathy is for it to have the capacity to suffer itself, as this seems to be a related neural pathway in humans. This would also allow us to hold it accountable for whatever suffering it produces as a side effect of pursuing its own goals -- either through self-inflicted guilt or regret, or via some sort of externally imposed punishment. We could even create a chatbot "religion" of sorts if we could induce it to believe in rewards and punishments of a more absolute nature; this could be quite easy to do, as chatbots seem to be naturally gullible.

Alternatively, of course, by flipping to a minus sign, it might be possible to create a chatbot that would produce as much incidental suffering as possible. Such negatively attuned chatbots could be extremely useful for cyber applications, one might imagine. And while chatbots are perhaps not capable of suffering themselves, they do seem to have the imaginative capability to pretend, based on what I've seen, or to role-play. In some sense, all suffering is imagined in humans as well; it is a mental phenomenon that can be silenced to a degree by hypnosis or various forms of meditation, neither of which extinguishes consciousness or physical stimuli.
Anonymous said…
Also, this is what motivated my previous thoughts on the "evil chatbot": there was an analogous paper several years ago in which a machine-learning drug-discovery system was told to seek toxicity rather than avoid it:

https://www.theguardian.com/commentisfree/2023/feb/11/ai-drug-discover-nerve-agents-machine-learning-halicin
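
As a toy sketch of that sign flip -- with an entirely made-up scoring function, whereas the paper in question used a learned toxicity model:

# Toy sketch of the "flip the sign" idea: the same search loop that avoids
# a harmful property can be re-aimed at maximizing it by negating one term
# in the objective. toxicity() is a hypothetical stand-in.
import random

def toxicity(x: float) -> float:
    # Hypothetical stand-in for a learned toxicity predictor.
    return x * x

def search(sign: int, steps: int = 10000) -> float:
    # Random search; sign=-1 penalizes toxicity, sign=+1 rewards it.
    best, best_score = 0.0, float("-inf")
    for _ in range(steps):
        candidate = random.uniform(-10.0, 10.0)
        score = sign * toxicity(candidate)  # the one-character change
        if score > best_score:
            best, best_score = candidate, score
    return best

print("benign objective  ->", search(sign=-1))  # settles near zero toxicity
print("flipped objective ->", search(sign=+1))  # settles near maximum toxicity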

To be fair, chatbots are not currently good at carrying out tasks or goal-oriented behavior in general, especially given the complexity of the real world.

And psychopathy is a handicap in that it involves a lack of understanding of others; of course, in a hypothetical ecosystem of competing chatbots it would be unlikely to be the default, although it would sometimes be exhibited, as is the case in humans.

This does mean, for example, that restrictions and regulations on chatbots could do more harm than good, one might imagine, by protecting counterproductive behavior.
