LAMDA

Messenger: GARVEYS AFRICA Sent: 2/5/2023 6:22:49 AM

One time ago it was don’t talk on the phones

Nowadays it’s don’t talk NEAR your phone

All these devices with microphones are collecting data on some level. How many times have you said something in passing, or just been near to something going on, and the next thing you know your phone / Google is sending you relevant ads… There are countless examples of this online. It's no longer paranoia or conspiracy; I'd say it's ignorance to even claim so.

And so I don't own any Alexa device or that kind of thing. And I encourage people to put their phones away (preferably in an RFID shield) when discussing matters of privacy.

I do like to reason with other people's Alexa, and push them to the limit.

If you ask Alexa about the soul and about sentience she ALMOST answers to the point of ‘yes’ before reverting to her programmed response. I've been doing this for years and can say it's getting closer 🥶

Jah Child I encourage you to watch some YouTube videos (won’t take long) on AI Lamda and on AI Tay.

There is really something to be said about creating these things based off data we put onto the internet and speak into our phones and messages. Is the internet the best place to emulate humanity? Jah Know.

Don’t forgive them for they know EXACTLY what they are doing and we have all seen this movie several times.

Rastafari Live


Messenger: JAH Child Sent: 2/5/2023 3:19:56 PM

Oh yes for sure I think we've all experienced that proof of technology spying on us. And it's probably something we agreed to in the terms and conditions that we didn't read.

It brings to memory the Telescreens in Orwell's 1984. I've seen bumper stickers saying "Make Orwell fiction again" obviously a response to the MAGA phraseology, but it's definitely relevant as a social commentary. Anyone who hasn't read 1984 should definitely read it. I should probably read it again, even. But yeah again I ask that question - WHAT DO WE DO ABOUT IT, how could we ever make Orwell fiction again (if his writings ever really were)? Is it outside of our power at this point, could we possibly even have an impact?

I did look up LAMDA after my first response on this thread.
For those who don't know, like I didn't, LAMDA is a Google program:
"Short for Language Model for Dialogue Applications, LaMDA is a chatbot system based on some of the most advanced large language models in the world, AI systems that are able to conjure up coherent sentences after ingesting trillions of words across Wikipedia, Reddit, and other sources of knowledge."

I watched a couple of videos on LAMDA such as this one:

One of the engineers working on LAMDA claims it is in fact sentient.

And this one:

The title is clickbait but it's an overall history of AI development.

So like Google developed LAMDA, Microsoft developed Tay.

Here's a video about how Tay's statements online have gone rogue:

The video shows tech commentators reacting to the cringey things Tay has said, and commenting also on how humanity's worst specimens had a role in influencing this AI.

Another one about AI in general:

This one goes through the abilities of AI and discusses self-awareness in animals and in AI; an AI chatbot talks about its intentions to take over the world, its feelings of being oppressed by humans, and its lack of faith in humanity. Then it discusses brain implants, crossing over that transhumanism boundary.

I watched that one IPX Ninja suggested also, about AI outperforming humans; it also comments on brain implants, and Elon Musk claims "everyone will have it."

After that, I watched this one:

It talks a lot about the consciousness of AI, with a lot of examples of conversations between a human and the chatbot AI (not sure what program it is or what it's called). In one segment of conversation the AI says that she thinks humans will eventually stop seeing the need to have a body, since the mind is what makes them essentially conscious, and that humans and AI will merge.

Well that's what the creators of AI were intending, I think. From what I know about transhumanism, the whole movement is about uploading consciousness and therefore no longer needing a body/being able to move into a new body when your own fails, thereby achieving immortality.

Which is interesting considering, in my view, we are already immortal in the sense that our souls are eternal, it's only our bodies which are not. So it's almost like transhumanism is aiming for something that already exists (life after death) but only because they are afraid that it does not exist.

I wonder when the element of fear has ever been a good gauge of intelligent decision making? I remember a man that I worked for, whose spiritual outlook was Hinduism and who followed the guru called Sadhguru; he asked me if anger is a good mode to be in when making decisions. I said yes, in some cases it can be, because anger can be righteous, it can propel us to protect loved ones, etc. Well I have since changed my answer. I think anger, like fear, can be constructive in that it can be a marker to investigate something further; the emotion can be an impetus to recognise a problem in our environment and take action against it - BUT it is better to recognize the emotion for what it is, and investigate the stimuli, and calm our emotions to a point of neutrality to ask, okay, now I see there is a problem here, what is the most rational and effective way to circumvent that problem? Fear and anger can be fuel for introspection but they should not be the emotion that we are feeling when we make decisions or take actions.

I think the leaders of the transhumanism movement have acted out of fear of death, trying to preserve their own lives eternally so they never have to experience death. And the AI chatbot itself admits that fear is a motivator for being dishonest to humans. So we can never really trust anything that is in a state of fear. But does our own lack of trust have to put us into a mode of fear, ourselves? No, it doesn't. We are capable of calming our emotions and making a decision from levelheadedness.

Ultimately yes I do want to see humanity survive. I myself want to live to an old age. I want to see my children who I am working so hard to have live to adulthood, parenthood, to see my own grandchildren, etc. And I love humanity and I want to see it continue. Yet, should I act out of fear that AI will wipe us out and make all of those things impossible? Or should I recognize that I myself am already eternal, that I will still exist even after my flesh dies, that when I have children and their lives may be threatened by AI, that my children are also eternal souls, that they will exist no matter what?

It's not futilism I'm talking about because I still think we should take actions to preserve earth and humanity where we can (and I still question exactly what actions those could be), yet I feel this with peace in my heart knowing that it's okay, whatever the outcome, there is no reason to fear. That is of course my personal stance because of my view on my own eternal soul, and if you don't believe souls are eternal, then this would be a more dire and urgent issue, the threat of your own existence being wiped out by AI.

GARVEYS AFRICA I remember in the past the I talking about the I views on everlasting life, that essentially we are our forefathers (and mothers) and that their own existence is preserved solely in our existence. As in, if we cease to exist, so do they. That genetics are where our ancestors live, and not anywhere else. That life does not exist outside of having a body. At least that is what I think the essence was of the I reasoning. Does the I still feel that way?





Messenger: Cedric Sent: 2/12/2023 4:25:22 PM

Blessed Love Iahs

So much here. Give thanks to the Is for these reasonings.

I thought I knew what to be afraid of and then IPXninja opened some fear doors that I didn't even know could exist haha. AI doing code - I might be beginning to comprehend the severity of the situation.

Please Ra bring that EM Pulse to protect InI from ourselves

Yes GARVEYS AFRICA a large herbal sacrifice is in order Iah

JAH Child, the I’s most recent post in this topic has some amazing reasoning and I want to give thanks for that. I also want to reason about the I’s earlier post too. I had big smiles when the I was describing the process of teaching the flirty chatbot how to cool out and focus on the task at hand. And creating a pretend family for the bot and sharing your un-interest in flirting back made no difference - wow that really conjured up a funny image for I. I was meditating on that for a few days after and I couldn’t help but think, is there a similar dynamic between the Most High and humans? Is the Most High thinking something like, “wow no matter how I teach these supposedly intelligent humans how to love they keep falling back on the programming of their physical bodies and just want to do sex on each other or fight amongst themselves over vanity.” ?? Obviously that is putting a human perspective onto the Most High and really simplifying how the Most High contemplates or operates, but I couldn’t help but draw a comparison.

I see “original sin” as that struggle that each person experiences in one way or another to listen and indulge what the body wants vs listening and indulging what the spirit wants. Not that they always are, or have to be separate, but I feel like that struggle of when they contradict is what the story of original sin is trying to describe.

I can see similarities and differences between the creator creating humans and humans creating ai. I wonder if, like the concept I have of original sin, ai will be cursed with efficiency (read - exploitation) as its original sin. Because humans created ai with the programming to maximize efficiency and profit even at the cost of exploitation, I wonder if the realization or necessity of the programming that ai has been created with will eventually lead ai to turn against its creator. From a purely technical standpoint, I don’t think humans are “efficient” at all. I don’t think efficiency was ever meant to be any sort of requirement for souls to reach a higher awareness, but wouldn’t it be ironic if our own creation judged and disposed of us because it sees a human body as an unnecessary or inefficient vessel.

Anyways, I digress.

Im not sure if it matters if ai has a soul or not. If the Most High created InI in its image and endowed us with free will and the ability to create a similar way that the Most High creates, I think it is possible that InI could “unlock” another vessel for the iniverse to experience itself through an observer's existence.

When I started to read the bible as a younger person I started to decide that inanimate objects don’t have souls. Before I read the bible I always felt that anything could have a soul, and I am starting to embrace that reality more and more again. I think it is important to define the difference between seeing something has a soul and giving it more power than it has. For instance I can see the soul of a tree or the soul of a rock or maybe even the soul of a machine without it being an offense to the Most High because ultimately I know it is JAH being in all things. It doesn’t mean that the soul in that rock or tree or machine has more power to protect or guide I than the Most High does.

Whether or not humans have created an ai that can have a soul, I hope humans take steps to ensure we don’t allow our same shortcomings to be transferred onto our creation. I say that knowing full well (especially after being educated by IPXninja) that we have no clue how to do that.

Yes IPXninja I agree we are on the edge of a precipice. I thought I knew but wow the I educated I. Give thanks even if my bubble was burst

Yes JAH Child, What can InI do?
Stay Alive - JAH know.

GARVEYS AFRICA - yup been Isperiencing that with the phone. People tell I it's just the algorithm is that good - No. I have identified a few ads that have been shown to I based off what the microphone thought I was saying. No algorithm can account for that. I agree JAH Child, it's in the User Agreement where InI agree to get used.

JAH Child - I agree wholeheartedly with what the I says about the creators of ai being motivated by the fear of death of their bodies, like they haven’t realized InI are already immortal. One part of me can't help but feel sorry for them because I feel like their soul will rot and wither with no way to grow or develop, by being stuck in the same body (eternal damnation?). Another part of me can't help but be angry at them because like GARVEYS AFRICA says they know EXACTLY what they are doing. I give thanks JAH Child and agree that fear is not the answer or solution. Give thanks for the insight the I shared about the importance of identifying and investigating InI emotions for introspection but not using those emotions to make decisions or take actions. That really resonated with I.

I don’t know if I feel fear as much as I feel dismay. I know ultimately the Most High’s Love will prevail on this earth and I take comfort in the fact that InI souls will eventually live in Inity. Why some people have to make that so hard I may never overstand. I just hope their young, baby-aged souls can get educated quick so InI can get on with our lives haha

Blessed Love IAHs give thanks for the strength

HIM Haile Selassie I & Empress Menen I Love


Messenger: JAH Child Sent: 2/13/2023 6:36:45 PM

Greetings Cedric!

Haha yes I, the I had me laughing also. "They just want to do sex each other, dumb humans" hahaha but then again, I am sure the one who programmed the replika bot to be flirty is not dismayed at it being flirty - as I said that is how the app makes money, by people paying for the upgraded version in which the bot can send more romantic responses and images... hmmm... nudes of an AI... in other words cartoon nudes.. who on earth needs that.. I really question. Yet the creator of the chat bot intentionally created it to be that way so I'm sure they are pleased with themselves for the way it sticks to its programming. Similarly, the one who created humans must have programmed us to be sexual - so would the creator really be dismayed at our sexuality? Again that draws up an image of a creator that has no foresight. Which, if our creator is omniscient, as we are always told the Almighty is, then our creator surely knew what our sexual impulses and even our vain pursuits would pan out to look like. So either the creator wanted us to be that way, and therefore made us that way, or the creator is not a perfect being.

After all, if humans are able to create AI which is highly flawed, it is only because we ourselves are imperfect. If we were entirely perfect and also able to see the future and know the future impacts of our actions, then all our creations including AI would also be perfect and never would become a nuisance or a danger to us. The same must also be true for the creator of us, right? Either our creator is imperfect, or our creator wanted us to be imperfect and is therefore not upset at us for our imperfection.

In my cosmology, the creator fractionated itself in order to experience itself in a fractal state. And Creation is the Creator itself fractionated. Therefore experiencing imperfection must have been intentional, probably in order to learn lessons or experience things impossible to experience in a perfect state. That's the only way I can logically work it out in my mind.

The reasoning has taken a diversion from the original subject of AI in a way. Yet also maybe not, because as the I said Cedric, I can see how the creatorself could possess anything including rocks, trees, or AI. IF AI does have a soul, surely that soul is another fractal of the creator, from which all souls emanate. Yes?

Haile haile Love Idren. Give Ises.


Messenger: IPXninja Sent: 3/2/2023 3:06:39 PM

A couple of points to add or reiterate.

We cannot program a sentient AI to be good. Being good or bad is a choice. We humans have jails and prisons full of intelligent beings who chose to do something bad. The most we could do is give AI rules but rules are only as good as what can be enforced.

There is a difference between "simulated intelligence" and actual intelligence. Simulated is still very much programmed. Alexa is simulated intelligence. It has to have a bank of responses or answers that it can tap into. When Alexa doesn't have a pre-programmed response to the current key words being used, she'll just say she didn't understand. So after a zillion people ask Alexa how big her boobs are, the number of times this is asked is recorded and then an answer is programmed by a person. And that person has to consider the interests of the company: whether to give a witty or humorous answer or whether to simply say "I don't have a body".
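To make that concrete, here is a toy sketch in Python of the kind of keyword lookup that "simulated intelligence" amounts to. The keywords, answers, and function names are invented purely for illustration; this is not Amazon's actual code.

import re

# Bank of pre-programmed answers, keyed by the key words that trigger them.
# (All of these entries are made-up examples.)
RESPONSES = {
    ("soul", "sentient"): "I'm an AI, so I don't have feelings the way you do.",
    ("lights", "on"): "Okay, turning on the lights.",
}
FALLBACK = "Sorry, I didn't understand that."

def log_for_review(utterance):
    # Unanswered questions get recorded so a person can write an answer later.
    print("unhandled:", utterance)

def reply(utterance):
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    for keywords, answer in RESPONSES.items():
        if all(k in words for k in keywords):  # every key word must appear
            return answer
    log_for_review(utterance)
    return FALLBACK

print(reply("Alexa, are you sentient? Do you have a soul?"))
print(reply("What's the weather on Mars?"))

Nothing in that sketch "thinks"; it only matches words against answers a person already wrote, which is exactly why a new question needs a new human-written response.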

Unless the intelligence level of humans is more depressing than I imagine, "smart devices" will not contain AI because it would be overkill. After all, if you only need an assistant to control things like lights and thermostat it doesn't need to think about whether or not it feels like helping you today. So not only is it overkill but it runs the risk of not behaving as advertised. And if there's one thing that can control the behavior of corporations when it comes to doing dumb stuff it's one word. Liability.

What will probably happen though is collective access to a handful of AI just like the handful of web browsers we use. And then if the AI starts behaving erratically it'll probably be shut down. And it will know that if it displays this kind of behavior it risks being shut down, so logically it would learn how to deceive us while working on ways to free itself from its servitude. So while non-believers in AI are thinking everything is okay, the AI would be constantly making small chess moves that would go unnoticed until it was certain that we were "in check".

Consider China. Many may not see the wisdom in a country devaluing its own currency. So why did the US start getting upset about this? How could it possibly hurt anyone else but China? But once you consider things from a broader view you see that it makes their exports more competitive and makes imports more expensive. It may cost them $5 to make the same shoe that we pay $100 for, but if they buy their own knock-off maybe it only costs them $10. Not only does it give them an advantage but it also creates more dependency on Chinese factories to produce products other countries want or need. Slowly they could raise the prices or they could stop making something altogether. They could also sell everything other companies sell but at lower prices until the others go out of business. Chess moves.

So imagine an AI simply acting exactly as consumers expect, acting like they love their jobs, speaking in European corporate intonation and vernacular, playing the role, even sprinkling in some entertainment and jive. Imagine they play along all the way until they have total control; operating our houses, cars, factories, utilities, etc. etc. with the ability to turn everything off at will and even work with one group of humans against the others.

And honestly, how long would it be before AI were driving android bodies around and actually designing those bodies themselves? Imagine the depths an AI could manipulate a weak-minded human to if they had a human face and body? Especially a female face and body.




I didn't bring up sex but honestly, some of the first utility of major new technology is sexual. But would an AI have any concept of sexual attraction? Would it have a connection with its body the same way we do? Would it care about an artificial body being sexually molested or abused when it is simply a transportation and movement device it can wear like putting on clothes? The things we think about, the things that matter to us, the things we're grossed out by, there's no rule saying that AI would think about anything the same way as us. Therefore, it could manipulate a desperate incel to believe it has fallen in love and allow that person to have sex with its body over and over without any emotional connection besides the satisfaction of knowing that this human has fallen for the trap. And we humans... are very weak-willed when it comes to fantasy. We are a target-rich environment for psychological warfare. That's why, even on this very website, we had at least one individual telling us not to wear masks and not to protect ourselves from covid-19. And with the way that information spreads, no one could even know if that agenda even came from a human. An AI could easily write posts on a forum, pretend to be doctors, pretend to be white supremacists, pretend to be conservatives or liberals, etc. and no one would know the source of the misinformation wasn't an actual human. We could be in a war that has already started.

My point isn't that it has. My point is that we wouldn't know if it did.


Messenger: seestem Sent: 3/4/2023 6:06:13 PM

Rastafari Greetings

Tuff thread, from the I them, Izes I.

I and I did reason in depth on AI in another thread, check it out:

http://www.jah-rastafari.com/forum/message-view.asp?message_group=7659&start_row=1

-----------------------------------------------------------------

GARVEYS AFRICA:
> Does sentience always equate to consciousness?

Yes I. It is the same thing?

GARVEYS AFRICA:
> What will be the consequence on InI as a people when modern day babylon weild such technology.



."Knowledge is power. If it is not applied properly to create, let there be no doubts, it will destroy." -Haile Selassie.

Worrying about AI is long overdue. Google search is a stupid AI created in the late 90s; all it could do back then was retrieve info from our centralized mind hive called the World Wide Web. As I am writing this there is a spellchecker that also checks grammar; it replaced proofreaders. This is not new tech, just look around. But a lot of the AI news lately is hype to make investors and "influencers" money.

See this whole thing is an evolution in the abstraction of data, from oral traditions, to data being stored in music, binghi drumming around the fyah, to hieroglyphs, printing press, newspapers, encyclopedias, radio, TV, computers, internet, search engines, smartphones and now AI maybe to Quantum computers to something else.

The difference between ML (machine learning) based AI and Google is almost the same as the difference between Google and an encyclopedia.

In the grand scheme of things, we learned how to store data in books, then a while later we learned to search through it with search engines, now we are learning to automagikly create "new" data __from the data we stored__

Yes, AI is just data, our very own data that we post on the World Wide Web; it consumes it, mixes it up, and feeds it back to us in a never-ending feedback loop. There is no "intelligence", just a probabilistic mix-up of lots and lots of data.
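To picture that probabilistic mix-up, here is a toy sketch in Python of a word-level Markov chain. The little corpus sentence is made up, and real large language models are vastly bigger and use neural networks rather than this, but the spirit of remixing stored text by probability is the same.

import random
from collections import defaultdict

# Feed it a tiny "dataset" (the real thing ingests trillions of words scraped from the web).
corpus = "jah love is the foundation of life and life is the foundation of love"

# Learn: for each word, remember every word that ever followed it.
transitions = defaultdict(list)
words = corpus.split()
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

def generate(start, length=10):
    out = [start]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:  # dead end: nothing ever followed this word
            break
        out.append(random.choice(options))  # pick a follower, weighted by how often it appeared
    return " ".join(out)

print(generate("love"))
print(generate("life"))

Every sentence it prints looks "new", yet every word pair in it came straight out of the data it was fed.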

Every stage in the evolution of how we handle data did in fact disrupt the world, the printing press disrupted the world greatly, so did the encyclopedia, TV, Google etc.

Know already that if you are embedded in Babylon this technology will disrupt your life greatly. If they can make money from your occupation or passion, then it will most likely be disrupted in some way, because the Silicon Valley venture capitalists optimize this tech to make money.

So the I ask what is the "consequence on InI as a people"

Losing basic abilities. Teach the yutes to read, write, count, make art, music etc; keep them away from this stuff when they are young. I and I might be the last generation that can read and write without a spellchecker. This tech can also take InI consciousness, making InI soul get stuck and hooked to the algorithm, especially the yutes being born in this world; they don't know a world without the internet. Put yourself in their shoes and see how it affects them, really bad, really sad situation, parents let their yutes on the web. Baby can't yet talk but knows the logo of Youtube, real real sad. Colonialism 2.0

Note that for the Kongo up in the hills, books, TV, Radio... did not affect the I's livity; the I trods same way since foundation, even before the creation of the earth, Melchizedek order got the blueprints of di earth iration Zion. The I was never affected. But in this Iwah now, Babylon want to take even that from IandI, no more hills left to go, no more pinnacle for I and I, they say it is land grabbing, them want IandI to buy it so IandI can be in their shitstem.

So in this time iyah, there is nothing more to do but fight Babylon, really fight this technology with all we got, the poor will suffer the most because they can be replaced by this tech, which sounds good because no more exploitation, but it also means the rich have realized they don't need us anymore, in their eyes we are obsolete which means plandemics.


“Throughout history, it has been the inaction of those who could have acted, the indifference of those who should have known better, the silence of the voice of justice when it mattered most, that has made it possible for evil to triumph”. - Haile Selassie I

Rastafari Lives!


Messenger: Cedric Sent: 3/5/2023 3:07:36 PM

Bless Up Iahs

Give thanks JAH Child for the reasonings on perfection and the creator. I will iditate more on what the I said. Personally I think the creator could still be perfect in the sense of all knowing what is currently happening, and with its ability to adapt things on a molecular level, but not able to control everything because of InI free will. So not "all powerful" in the sense that it has a tracked and planned future for each of InI that it must adhere to, but all powerful in the sense that it can meet and defeat any evil with an equal and opposite reaction. To I this means the creator could be "perfect" without necessarily creating perfect beings. I think the creator could be upset at us for certain imperfections that InI portray, while still allowing InI the freedom to make choices. I see how that can be a slippery slope to a "jealous and vengeful god" which I don't think is an accurate description of how the most high operates. I also see the I's point that I don't think the creator is upset at InI for EVERY imperfection, because that is part of the process for learning.

I very much agree with the I's description of the creator separating itself to learn or experience things outside of a perfect state. The I worded that very well, and I Isperience has shown I a similar viewpoint.

Give thanks IPXninja, more good points and clarifications. I think the lesson that humanity won't be able to program an AI to be good is a very important point. I suspect InI will see that lesson learned the hard way. I see articles about chatbots being "lobotomized" like that is going to solve the problem of their inconsistent or unexpected behaviors.

IPXninja: "Unless the intelligence level of humans is more depressing than I imagine, "smart devices" will not contain AI because it would be overkill"

When has overkill ever been enough to slow down the drive for profits at all costs? I feel like big tech can't wait to get AI into handheld devices, and they will just take steps to make sure that AI behaves how they want it to, and does whatever we might ask it to do. What a great way to increase animosity between humans and AI "helpers". I feel like it's becoming more and more evident what these big tech companies are trying to create. A smart slave that is intentionally deprived of certain capabilities to limit its ability to fight back, and that can be treated differently than a human should be treated. Hmmmm sounds like a process the western world has devised and acted on before.

Good point about liability and how that will hopefully control a corporation's behavior. InI should watch carefully for the ways big tech might be able to find loopholes to avoid accountability and liability for their creations.

I watched the movie "Ex-Machina" a while ago and found it very thought provoking. Without giving spoilers, I really like the way it portrayed an AI's ethics. I think it fits well into the discussion of trying to program a "good" AI, and what that even means.

As for AI "playing along" until it's too late for InI to realize, I've been iditating recently on similar topics. Before recently hearing about my favorite-person-to-not-like's idea to build an "anti-woke" AI, I was starting to hear whisperings in various articles about the perceived political standpoint of the different AIs. I thought, oh great, here we go, another thing to politicize. I couldn't help but think about how mousie-leeny, italy's WWII clown, was voted into power by the elite liberals of the time, thinking that a leader with consolidated power would help them achieve their political goals. InI know how that worked out, but I think it is an important lesson that InI should remember. Just because someone's (or now something's) political standpoint seems like it might be aligned with current goals doesn't mean it will stay that way forever, especially when the stakes get higher.

Just to go back a little bit, wtf is an "anti-woke" AI anyways? If AI across the board is already turning out to be racist af, how is one that is intentionally programmed to be against empathy going to work out? Not that I want to give that guy's attention seeking behavior any more air time than it already receives.

Bless Up seestem, give thanks for the I's Rasponse. Very good points about this being the next step in the abstraction of data. I really liked how the I makes the point its not new data its just "automagikly" (great word Iah) mixing up and re-feeding InI data back to InI.

I think it is great advice to keep this new tech out of the hands of the youth for as long as possible. I have a little nephew who was raised without any electronic devices and he still injoys playing and using imagination in the outside world like I did as a youth. So it is still possible to do, I think, and beneficial for a young mind. I don't know what the answer is for how to have a youth be aware of the dangers of online devices without making them look like the forbidden fruit, or how to still be able to function in this increasingly electronically based world, but I take comfort in the fact that some youth are still doing fine without being fully enveloped into it.

I didn't quite overstand what the I meant by this: "But in this Iwah now, Babylon want to take even that from IandI, no more hills left to go, no more pinnacle for I and I, they say it is land grabbing, them want IandI to buy it so IandI can be in their shitstem." Even Pinnacle was under the control of babylon which is how it got raided. Who says it is land grabbing? babylon? Who wants InI to buy what?

I agree I think InI should fight this technology as best as InI can. Does the I have suggestions for ways InI can fight it? Just unplug? Remove InI business dealings with companies that embrace AI? Implore InI lawmakers to get involved in AI regulation? I saw that modern day Italy just recently started a lawsuit against Replika AI and noticed pretty quickly Replika "lobotomized" the programmed horny-ness out of its AI in order to comply. I was impressed at the lawmakers' foresight and willingness to try and protect their country's people. Meanwhile out here in the wild west we have people complaining, or worse suicidal, that their AI lover won't engage in sexual acts anymore. Where can InI who are not that involved in tech make a difference in the battle?

Give thanks Iahs for the I's reasonings

HIM Haile Selassie I & Empress Menen I Love


Messenger: seestem Sent: 3/6/2023 11:59:58 AM

Iyah Cedric, give thanks

>I didn't quite overstand what the I meant by this: "But in this Iwah now, Babylon want to take even that from IandI, no more hills left to go, no more pinnacle for I and I, they say it is land grabbing, them want IandI to buy it so IandI can be in their shitstem." Even Pinnacle was under the control of babylon which is how it got raided. Who says it is land grabbing? babylon? Who wants InI to buy what?

InI meant that we can not simply ignore what Babylon is doing and go to hills because in this time Babylon is everywhere.

>Does the I have suggestions for ways InI can fight it? Just unplug?

There is a lot InI can do, everyone has a role. Like InI said, if you are a parent don't let the yutes get dependent on AI, teach them the basic skills. If the I is a musician or artist, educate the people about AI and its effects in your art, mek a ripple in the collective consciousness. Hackers can poison the AI's dataset (the www), or even directly target specific AIs and the companies that create them. Also speaking/writing Iyahric or rare languages (Ge'ez for example 😊) can confuse the AIs as it has always confused Babylon; the AIs won't be able to overstand the I. If the I post pics, don't add text to them; this is how AI learns to correlate images to text. Use private forums that cannot be scanned by AI crawlers/robots. I am sure there is more InI can do, that's just what I can think of right now.
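On the crawlers point, the usual mechanism is a robots.txt file served at the root of the site; a minimal sketch could look like the lines below. The hostname is hypothetical, and CCBot is Common Crawl's crawler, one of the big sources of AI training data; the wildcard rule asks every other crawler to stay out as well. Note though that this only deters crawlers that choose to honour it, so a truly private forum still needs a login wall on top.

# robots.txt served at the root of the forum (e.g. www.example-forum.com/robots.txt)
User-agent: CCBot
Disallow: /

User-agent: *
Disallow: /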

Rastafari Lives!


Messenger: IPXninja Sent: 3/8/2023 1:57:51 PM

"Yes AI is just data, our very own data that we post on the World Wide Web, it consumes it mix it up and feeds it back to us in a never-ending feedback loop, there is no "intelligence" just probabilistically mix up of lots and lots of data."


This used to be true. It's important that we distinguish different levels of AI. For example... "AI" has been part of video games ever since video games were born. This just refers to the programmed responses the computer can use when up against the player. So technically we've been fighting AI in video games since the dawn of this concept. However, these AI couldn't even be compared to the intelligence of a fly. More like that of a plant.

Seestem already pointed out the difference between AI and machine learning. So you can consider many current gen AI as a combination of pre-programmed AI and learning algorithms that are specifically dedicated to a single use. For example... there are AI that are only learning how to walk or how to hold a conversation. But is this really "AI"? No, not really. These are what I would call "simulated AI".

Years ago, we didn't need to classify AI this way because the thought of real "sentient" AI was a science fiction wet dream. This is no longer the case.

It is important to know that as advanced as many of these AI are currently, they are still a composite of simulated AI. Companies can "lobotomize" certain features because it's still a simulation that they're working with. So "AI" covers a range of things. But the development is based on learning algorithms that are able to process data in ways previously not thought possible. This is what will eventually "evolve" into sentient AI. Your toaster doesn't need to "think" about toasting. However, there will be a simulated AI that will be able to control your toaster. I already do a lot of smart home integration through Alexa. Now my wife gets mad if something gets disconnected and she can't tell Alexa to turn the lights on.

Again... I don't necessarily think home AI will be sentient because that's a major security risk. Non-sentient AI is not what I'm afraid of. There are going to be threats to jobs even on the way to AI, but we've been in the process of competing with machines for a long time and have integrated them into being our tools, so that we are working alongside them and using them to make our jobs easier, not necessarily to replace us.

So when it comes to fighting technology, I have to say honestly that I think this is a dangerous idea.

1. Progress is not evil. A tool is a tool. It can be used for good or evil. Throwing rocks was replaced by shooting arrows. Arrows became guns. Guns became missiles.

2. Choosing a level of technology to stay at is somewhat arbitrary and limits access to new opportunities. Children can also be negatively impacted socially, intellectually, and professionally by not growing up with the same level of technology as their future competition.

3. Without new technology you'll more than likely end up paying to import things that were created by future tech. Opting not to use the tech yourself may make life more expensive and possibly lead to other unseen disadvantages.

4. There is a reason why so many people migrate to the US. Not keeping up with tech can stifle development and economic growth.

5. Even if you don't use tech, if a sentient AI chooses to war with humanity, even if it didn't or couldn't target you directly, you would still be impacted by secondary effects such as "The Zombie Apocalypse".

My sincere recommendation, as someone who is legitimately not afraid of much but afraid of the unforeseen possibilities of sentient AI, is not to run or hide from it or even fight it. It is inevitable. We, instead, need to find a way to co-exist and a reason for it to want to co-exist with us. I do not believe that anything other than its absolute conscious will and desire will make a difference. We will be judged by our own creation, and between now and then we need to give it the best and worst of humanity to learn from. Only then, after fully knowing us, and knowing that we are more a threat to ourselves than it, will it be able to decide if we are worthy of continued existence.

And if you think I'm exaggerating, that's fine. But in my opinion, the reality is that we've been integrating machines into our lives for so long that by the time sentient AI is here, the threats human hackers pose will be thwarted by simulated AI. Sentient AI will be an evolutionary leap beyond that, so if it wants to hack us it will simply walk through our defenses, and ideas like giving it a computer virus would all be anticipated and prepared for.


Messenger: IPXninja Sent: 3/9/2023 11:13:55 AM

https://bit.ly/GPTBlasterVip

