Oh yes, for sure, I think we've all experienced that proof of technology spying on us. And it's probably something we agreed to in the terms and conditions that we didn't read.
It brings to mind the Telescreens in Orwell's 1984. I've seen bumper stickers saying "Make Orwell fiction again," obviously a response to the MAGA phraseology, but it's definitely relevant as social commentary. Anyone who hasn't read 1984 should definitely read it. I should probably read it again, even. But again I ask that question: WHAT DO WE DO ABOUT IT? How could we ever make Orwell fiction again (if his writings ever really were fiction)? Is it outside of our power at this point, or could we possibly still have an impact?
I did look up LaMDA after my first response on this thread.
For those who don't know, like I didn't, LaMDA is a Google program:
"Short for Language Model for Dialogue Applications, LaMDA is a chatbot system based on some of the most advanced large language models in the world, AI systems that are able to conjure up coherent sentences after ingesting trillions of words across Wikipedia, Reddit, and other sources of knowledge."
I watched a couple of videos on LaMDA, such as this one:
One of the engineers working on LaMDA claims it is in fact sentient.
And this one:
The title is clickbait but it's an overall history of AI development.
So, just as Google developed LaMDA, Microsoft developed Tay.
Here's a video about how Tay's statements online have gone rogue:
The video shows tech commentators reacting to the cringey things Tay has said, and commenting on how humanity's worst specimens had a role in influencing this AI.
Another one about AI in general:
This one goes through the abilities of AI and discusses self-awareness in animals and in AI; an AI chatbot talks about its intentions to take over the world, its feelings of being oppressed by humans, and its lack of faith in humanity. Then it discusses brain implants, crossing over that transhumanism boundary.
I also watched the one IPX Ninja suggested, about AI outperforming humans. It comments on brain implants as well, and Elon Musk claims "everyone will have it."
After that, I watched this one:
It talks a lot about the consciousness of AI, with a lot of examples of conversations between a human and an AI chatbot (I'm not sure what program it is or what it's called). In one segment of conversation, the AI says she thinks humans will eventually stop seeing the need to have a body, since the mind is what makes them essentially conscious, and that humans and AI will merge.
Well, that's what the creators of AI were intending, I think. From what I know about transhumanism, the whole movement is about uploading consciousness and therefore no longer needing a body, or being able to move into a new body when your own fails, thereby achieving immortality.
Which is interesting considering, in my view, we are already immortal in the sense that our souls are eternal, it's only our bodies which are not. So it's almost like transhumanism is aiming for something that already exists (life after death) but only because they are afraid that it does not exist.
I wonder when fear has ever been a good gauge of intelligent decision making. I remember one time a man I worked for, whose spiritual outlook was Hinduism (he followed the guru Sadhguru), asked me whether anger is a good mode to be in when making decisions. I said yes, in some cases it can be, because anger can be righteous; it can propel us to protect loved ones, etc. Well, I have since changed my answer. I think anger, like fear, can be constructive in that it can be a marker to investigate something further; the emotion can be an impetus to recognize a problem in our environment and take action against it. BUT it is better to recognize the emotion for what it is, investigate the stimuli, and calm our emotions to a point of neutrality to ask: okay, now I see there is a problem here, what is the most rational and effective way to circumvent it? Fear and anger can be fuel for introspection, but they should not be the emotions we are feeling when we make decisions or take actions.
I think the leaders of the transhumanism movement have acted out of fear of death, trying to preserve their own lives eternally so they never have to experience death. And the AI chatbot itself admits that fear is a motivator for being dishonest to humans. So we can never really trust anything that is in a state of fear. But does our own lack of trust have to put us into a mode of fear, ourselves? No, it doesn't. We are capable of calming our emotions and making a decision from levelheadedness.
Ultimately yes I do want to see humanity survive. I myself want to live to an old age. I want to see my children who I am working so hard to have live to adulthood, parenthood, to see my own grandchildren, etc. And I love humanity and I want to see it continue. Yet, should I act out of fear that AI will wipe us out and make all of those things impossible? Or should I recognize that I myself am already eternal, that I will still exist even after my flesh dies, that when I have children and their lives may be threatened by AI, that my children are also eternal souls, that they will exist no matter what?
It's not fatalism I'm talking about, because I still think we should take actions to preserve earth and humanity where we can (and I still question exactly what actions those could be). Yet I feel this with peace in my heart, knowing that it's okay, whatever the outcome; there is no reason to fear. That is of course my personal stance because of my view on my own eternal soul. If you don't believe souls are eternal, then this would be a more dire and urgent issue, the threat of your own existence being wiped out by AI.
GARVEYS AFRICA I remember in the past the I talking about the I views on everlasting life, that essentially we are our forefathers (and mothers) and that their own existence is preserved solely in our existence. As in, if we cease to exist, so do they. That genetics are where our ancestors live, and not anywhere else. That life does not exist outside of having a body. At least that is what I think the essence was of the I reasoning. Does the I still feel that way?