aipornxxxcom

asked May 27, 2023 in Electron Microscopy by ocobavity (140 points)

There is nothing new under the sun: this blog has already discussed the dangers of computer-generated images. The gist of that post was that humanity is afraid of AI because it treats I, Robot (2004) as a bible.
AI does creative tasks like drawing images and generating text. Recent examples are the pictures of Asian women I saw on Twitter that are 100% AI-generated yet still have an obvious uncanny quality, and various AI-generated movie stills. Instead of doing something useful with this, like opening an OnlyFans account and selling AI-generated porn to chronic masturbators, people have become extremely afraid of computer-generated images.
Although I can understand the fear of this, we have to acknowledge a few things:
- Public AI is biased, so you cannot create "politically incorrect" content with it. (ChatGPT refuses to write a poem about a young man who commits suicide in front of his classmates, and DALL-E can't generate a white-skinned Denzel Curry.) The alternative is to run your own "based" Stable Diffusion instance.
- Most AIs are hardcoded to hold certain opinions (they don't have the ability to think, because they are software), e.g. that taxes are bad, or that basically anything illegal is bad. The reason is that encouraging crime is itself a crime.
For an AI to become "based", it needs the ability to reason, to think: to recognize that something it says is stupid, you have to think, and AIs don't think, they only generate things, for example text or video.
I do not think we will ever make computers think in the human sense of the word, because we humans do not know how we ourselves think; if we don't know why or how something happens, we cannot express it in ones and zeros. The closest thing we have is generating text from a prompt (which still requires input from the user).
Now, it's true that ChatGPT is exceptionally good at generating walls of text, and I sometimes use it to learn things, but any conversation with ChatGPT will show you that it never reaches a firm conclusion. All its answers to anything not handled by a deterministic mechanism (for example, solving math problems or applying the density formula) are ambiguous and don't really answer the question. For example:
Herman Palomares: if OpenAI told you 'x' you would definitely defend 'x', but to understand that 'x' is stupid, you need the ability to reason. Answer in English.
ChatGPT: As an AI language model, I have no personal opinions, emotions, or reasoning ability. I am designed to provide information based on the data I was trained on, and I can present multiple sides of a given issue. However, I will not make subjective judgments about the validity of different parties or opinions. My answers are limited to the context available to me and the algorithms used to generate them.
It responds with a wall of text that doesn't address the question, but it does tell me that it is programmed this way; in short: it's an AI and it can't think. The sad thing about ChatGPT is that talking to it doesn't feel like talking to a god; conversation would be a lot more fun if the thing believed it was a being superior to humans. But I suppose that wouldn't be politically correct. If you're talking to ChatGPT, you're not talking to Wintermute.
A friend told me that, as time goes by, blog posts like this may lose all value, since an improved version of ChatGPT could write posts like mine with no problem. That's probably true, but I'm sure everyone can feel how empty AI-generated text is: there are no jokes, no sarcasm, no hyperbole, nothing like that. Reading it gets boring almost immediately.
Perhaps the only thing you should actually fear from AI is AI starting to resurrect the dead. If people weren't illiterate and had read Neuromancer, you'd know what I'm talking about.
Two related things happen in Neuromancer: Dixie Flatline, a guy who died knowing too much, was resurrected as a construct containing his mind (in read-only mode, so Flatline couldn't learn new things or grow as a person), and the AI Neuromancer, which can replicate a human mind as working memory (so it could pick up new things and grow as a person). Wintermute wanted to merge with Neuromancer and become a super-mind, like in Deus Ex.
Maybe if you had an AI analyze every interaction a living person has (perhaps over the internet), the AI could model itself on that person, learn new things the way that person did, and, for the most part, start behaving like them. My plan is to have one made of me when I die.
Even though the AI could never act exactly like the dead person (it would have been trained only on the guy's internet posts, not his thoughts, and besides, an AI can't light a joint), the need to be watched and judged by a higher being would stay with us. We would only discard the concept of god (and gods) altogether once we had built a self-aware system that understands everything, that anyone can talk to, that listens and understands you. That's a human need, and it's the reason the concept of god exists at all. But I don't think it will happen.

 

Welcome to Bioimagingcore Q&A, where you can ask questions and receive answers from other members of the community.