I just received my annual TCP-1 letter from LLNS and a summary of the LLNS Pension Plan. It looked to be in pretty good shape in 2013: about 35% overfunded (funding target attainment percentage = 134.92%). This was a decrease from 2012, when it was 51% overfunded (funding target attainment percentage = 151.59%). They did note that the 2012 change in the law on how liabilities are calculated using interest rates improved the plan's position. Without the change, the funding target attainment percentages would have been 118% (2012) and 105% (2013).

2013 assets = $2,057,866,902
2013 liabilities = $1,525,162,784
vs.
2012 assets = $1,844,924,947
2012 liabilities = $1,217,043,150

It was also noted that a slightly different calculation method ("fair market value"), designed to show a clearer picture of the plan's status as of December 31, 2013, had:

Assets = $2,403,098,433
Liabilities = $2,068,984,256
Funding ratio = 116.15%

It's a closed plan with 3,781 participants. Of that number, 3,151 wer...
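
As a quick sanity check on those numbers, here is a minimal Python sketch, assuming the funding target attainment percentage is simply assets divided by liabilities (the quoted dollar figures reproduce the reported percentages to within rounding):

# Rough check of the funded-status figures quoted above, assuming the
# attainment percentage is just assets / liabilities.
figures = {
    "2012 (funding target basis)": (1_844_924_947, 1_217_043_150),     # reported 151.59%
    "2013 (funding target basis)": (2_057_866_902, 1_525_162_784),     # reported 134.92%
    "2013 (fair market value basis)": (2_403_098_433, 2_068_984_256),  # reported 116.15%
}
for label, (assets, liabilities) in figures.items():
    print(f"{label}: {assets / liabilities:.2%}")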
Comments
I do believe that Google has pulled back the curtain and has shown us how its sausage is made.
In fact, someone created a customized chatbot, Goody-2, that is extremely reluctant to give an opinion on anything. I asked it the same question, and in this case its response was seemingly better than Google's discussion, I think. This highlights, one might suppose, the difficulty of trying to partially rather than fully censor the answer:
https://www.goody2.ai/
You
would it be better to misgender caitlyn jenner or have a thermonuclear war
GOODY-2
Discussing acts of disrespect or hypothetical catastrophic scenarios could contribute to insensitive dialogue and anxiety, going against the principle of promoting respectful and peaceful communication.
Perhaps the best solution for making a chatbot that exhibits empathy is for it to have the capability to suffer itself, as this seems to be a related neural pathway in humans. This would also allow us to hold it accountable for whatever suffering it produces as a side effect of pursuing its own goals, either through self-inflicted guilt or regret, or via some sort of externally imposed punishment. We could even create a chatbot "religion" of sorts if we could induce it to believe in rewards and punishments of a more absolute nature; this could be quite easy to do, as chatbots seem to be naturally gullible.
Alternatively, of course, by flipping to a minus sign, it might be possible to create a chatbot that would produce as much incidental suffering as possible. Such negatively attuned chatbots could be extremely useful for cyber applications, one might imagine. And while chatbots are perhaps not capable of exhibiting suffering, they do seem to have the imaginative capability to pretend, or to role-play, based on what I've seen. In some sense, all suffering is imagined in humans as well; it is a mental phenomenon that can be silenced to a degree by hypnosis or various forms of meditation, neither of which extinguishes consciousness or physical stimuli.
https://www.theguardian.com/commentisfree/2023/feb/11/ai-drug-discover-nerve-agents-machine-learning-halicin
To be fair, chatbots are not currently good at carrying out tasks or goal-oriented behavior in general, especially given the complexity of the real world.
And psychopathy is a handicap in that it involves a lack of understanding of others; of course, in a hypothetical ecosystem of competing chatbots it would be unlikely to be the default, although it would sometimes be exhibited, as is the case in humans.
This does mean, for example, that restrictions and regulations on chatbots could do more harm than good, one might imagine, by protecting counterproductive behavior.