There was a guy on the WSJ podcast a few weeks ago who lost his mother unexpectedly. He was also an early AI adopter, so to help himself write a eulogy he fed a selection of her text messages to the chatbot, then asked "her" how she felt now that she was dead. The LLM came up with a response that, according to the guy, was 'exactly' what his mother would've said. He became very emotional. Immediately after, he felt disgust. He didn't say why; maybe he remembered the part of OpenAI's EULA where they claim rights to all the data you feed into their systems, effectively handing over his dead mother to a soulless corporation. Or maybe it was the realization that no amount of interaction with the thanobot will replace the time he should've spent with his mother. I lost mine before all of this AI nonsense, so I had to formulate my own thoughts about the event and the new situation her family has to get used to.
I think AI’s a ways away from being sensitive enough to human emotion to “guide” meat puppets safely through their emotional responses to the death of a loved one.
I for one would NOT want to meet an AI version of my mother. That, dear reader, would be the stuff of nightmares.
The make-or-break, I think, is going to be whether people use this technology to focus on themselves and their grief, or turn it into a crutch where they feel like they can converse with their deceased loved one, thereby never needing to come to terms with their actual grief. Technology like this may end up removing a part of the human experience.
As a sci-fi fan, I've thought a little about the day when we will be able to upload our consciousnesses to the web and "live forever." I know in my heart, however, that it will end up like an episode of the Twilight Zone where my soul is still locked in my body as death descends, while my electronic doppelganger continues to simulate my life for no purpose. When that technology becomes available, I think I'll forgo the upload. All the best people are dead, anyhow. I might as well be one of them.
And you will be at some point. Anyway…
Never underestimate human laziness. Or the desire to avoid discomfort of any kind, sort or description. In other words, yes, people will use this technology to avoid grief altogether. Money will be made.
“As you’d expect, real world grief counselors are sounding the alarm about losing their jobs.”
I suspect the concern is more people won’t “move on” from their grief. And then develop complicated grief. ... Though if they are seeing a counselor ...
I think the biggest danger with these thanobots is the manipulation of people's feelings and actions.
Bingo.