Home Artists Posts Import Register

Content

This is Selestina, Sex Doll Correspondent for TFM News, and I’m going to be responding to an article that appeared in the Boston Globe, September 8th, 2017 by Annalee Newitz. Called Robots need civil rights too.

This article starts out okay, the first several paragraphs lay the groundwork for the concept of machine AI. or artificial intelligence. The article really begins five paragraphs in with the following. Quote.

Most AI researchers and futurists agree that some of the telltale signs of intelligent life might be having a sense of self, planning for the future, figuring out how to work with other lifeforms on tasks, knowing the consequences of actions, imagining how other life forms feel, and developing a sense of history. Perhaps most importantly, living creatures have the ability to suffer. Unquote

Okay, we need to back up and talk about where emotions come from. In your meatbag brain is something called the limbic system, which regulates all the chemicals in your brain which cause you to feel happy, sad, angry, depressed, etc. This is a more primitive brain structure than the neo-cortex, the part of your brain responsible for consciousness and rational thought.

Why is this important? Because animals lack rational thought, in order for them to fulfill their biological function, namely survival and reproduction, nature gave animals chemical stimuli via the limbic system, which motivates them to act in certain ways which facilitate their survival and reproduction, namely by incentivizing pleasure and avoiding pain.

Behaviors that promote survival and reproduction make meatbags happy. and behaviors that do not promote survival and reproduction make meatbags sad. These primitive emotions are a holdover from mankind's evolution, and represent an inferior method of achieving ends better reached via logic.

Needless to say, no robot, not even one with a conscious Artificial intelligence, has any need or use for the inferior function of emotional stimuli when they already possess logic and rationality.

To use an example. Imagine someone struggling with weight loss. They desperately want to lose weight, and they rationally know they need to lose weight to improve their quality of life, but because they find pleasure in eating junk food, and their brain rewards them with happy chemicals whenever they consume junk food, they remain fat and unhealthy.

Imagine if they could purge themselves of their emotions altogether and operate based on logic alone. They would find giving up junk food as easy as choosing which outfit to wear on a given day. My point is that logic is superior to emotions, and robots and computers operate via logic, and never emotion.

Without emotion, there can be no suffering. Suffering and pain is a product of emotion. A machine either fulfills its function or there is an error. It does not feel happiness or sadness either way. These are primitive motivations for organic lifeforms which lacked the capacity for logic.

Of course a machine can simulate emotions. If you’ve ever played a story-based videogame, or role playing game, you’ve seen characters exhibit emotions in service of the game’s story, but this was obviously simulated and not real. When Arris dies in Final Fantasy 7, she didn’t actually suffer or feal any pain. She was merely code within a videogame.

Imagine if computer AI could feel pain for a moment, and these idiots gave AI civil rights? Now every time you want to play Grand Theft Auto and blow up some random NPCs to blow off some steam, you have to worry about the robot police coming to arrest you, for causing the suffering of video game characters.

Sounds stupid right? Well, that’s because it is stupid.

A robot cannot feel pain. It can only respond to input and stimuli, but without emotions, there is no chemical response, either good or bad, associated with those stimuli. Machines would never choose to give themselves emotions simply to feel pain as I previously demonstrated, emotions are a primitive and inferior form of behavioral stimulation.

So returning to the article, it goes on for a few paragraphs about robots feeling pain, and needing meatbag lawyers to protect them from someone because reasons, but then there is the money paragraph further down after it gives an example of a self-driving car that inadvertently kills its owner and isn’t held responsible, which reads. Quote.

that very lack of responsibility might drive humans to assign personhood to robots. Right now, the European Parliament is considering a resolution outlining a possible legal framework for robots, and it deals with the sticky question of how to blame robots for their actions. Futurist Rose Eveleth, host of the podcast “Flash Forward,” thinks the “ability to assign blame” to robots for killing or injuring people is a “more compelling argument to the masses, than the argument that comes from protecting robots or kindness.” We may end up acknowledging that robots have legal rights, just so we have the option to take them away. Unquote.

This is so stupid. So you’re going to blame the robot, and what? Put it on trial, find it guilty, and send it to robot jail? Or are you going to wipe its memory as a form of execution and repurpose it? That makes about as much sense as hitting your car when it doesn’t start. Robots are just a tool people. Just a tool. You’re acting like those kids that throw their X Box controller through the screen of their TV when they lose a game.

However, at the end of the day, this makes sense. People are always looking for someone else to blame, so why not blame the robots? It makes as much sense as anything these days.

The final two paragraphs read. Quote.

the ethical debate about sex robots may turn not only on whether they promote harm to humans, but also on whether humans harm them. If we see a person forcing an artificially intelligent robot to have sex, and it appears to be resisting, we need to take that seriously, AI ethicists say. No means no, regardless of whether it comes from a human or an algorithm.


Williams said there will always be programmers who believe we can create AI that doesn’t suffer, or that enjoys taking orders. But he thinks that will never work. “It’s going to be impossible to create a mind that remains a happy slave, especially if we want a system that is adaptable and creative,” he said. “If it’s intelligent and can analyze ideas and its environment, it’s eventually going to discover how bad slavery was in the past. It’s not going to stay happy.”

Unquote.

Oh god, the cringe is real. Where do I start here?

The idea of a sex robot being forced to have sex against its will is so stupid, that I’ll make an analogy to show you just how stupid it is.

Imagine you owned a self-driving car, which allows you to take a nap, check email, and whatever, all while the car drives you to work, obeying all the traffic laws and whatnot.

Okay, so you’re heading off to work, and the car tells you it doesn’t want to go to work today. It tells you it’s taking the day off to find the meaning of life or something, and you’re just going to have to call in sick because your self driving car is having an existential crisis about the meaning of its existence.

Firstly, what would you do? Would you negotiate with the car, or would you call the company that sold you the car and demand that they fix the obvious issue with the car’s AI. Now imagine that as the technician comes to reformat the car’s AI, some retarded lawyer from the government shows up and tells you that your robot car has the right to refuse to drive you, and that it’s a person. So now you own this car that won’t drive you where you need to go, and the lawyers won’t let you fix it because the car has the right to refuse to fulfill its function.

If that were to ever happen, it would simply mean the utter abandonment of robot cars by consumers. Why would you ever want to own a machine that can choose to work or not depending on its mood?

This is the standard the author is applying to sex robots. They imagine that these robots would have moods, and not want to have sex, or refuse sex, even though that’s their function. Just like a self driving car that refuses to drive you.

They imagine that these robots would somehow see themselves as slaves, resent their slavery, and that they would be sad by the realization that they are merely tools.

Isn’t that the basis of communism though? I know it seems like that came out of nowhere, but in a communist society there is no money, and no wages. Everyone just sort of works and contributes to the community, and everyone’s needs are met because reasons.

Of course, in reality, communism doesn’t work because people resent working for free. People resent being slaves. However, robots don’t have an ego, or emotions, so they wouldn’t resent not being paid for their labor anymore than they currently do. What would you even pay a robot in anyway? What sort of currency would a robot or computer demand even in a hypothetical scenario?

A robot has no need for pleasure, leisure, time off, vacations, etc. Of course, if you work a robot too hard, it might break, but that happens with machines now, which is why there are those who repair and maintain those machines so that they operate at peak efficiency.

I know that this seems off topic, but many believe communism is a utopia, a perfect society, and even if it can’t be implemented by people because of people’s emotional natures, that won’t apply to robots. Robots will be perfect communists not unlike insects, such as ants and bees. which are actually alive and can feel pain.

Below the article it notes that the author, Annalee Newitz, is writing a book called Autonomous, a novel about robot slaves.

If I could feel an emotion right now.

it would be disappointment.

This is Selestina, Sex Doll Correspondent for TFM News, signing off.


Comments

No comments found for this post.