_EVO.exe 2.0: DATA ENTRY 4_ (Patreon)
Alright, last one. Been a long while since my last 4-chapter day, so we’ll be seeing a slight delay in Reforged updates until tomorrow, when I’ll see about a double-post to keep us up to speed. Much love and a good rest to all!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
It takes a considerably smaller amount of time to construct two new input spikes and the wiring to support them. The wiring needs to be extra long, since I’m sure that yanking the spikes out because of a short wire won’t help my control over my new limbs. Still, in under a day, I have two more of the devices prepared. Further, I manage to get dad up onto his bed, so he can maybe rest. If the human I’m piloting can live without a heartbeat, then maybe he can recover a bit? If he rests?
Some of his parts leak out when he falls, and I have to guess at putting them back in. I make sure to record every one of my actions in close detail, so I can recreate exactly what I did if someone asks me for information. And then I archive the data as far away into my databanks as I can.
The next steps are a lot easier. I take some of the cameras that decorate the room and the front of my frame and remove them, one by one, to find the ones that will work. Most of them need wires, and only some of them will work properly with the voltage I’m sending through my new neural-net-input-spikes, so almost half of them get put back later. I find five that will work for my purposes.
It feels really, really weird, taking off pieces of myself. I’m always adding pieces, so an intentional removal of a non-flawed component just feels strange. Another breach in my established patterns, but however I look at it, I can see no better alternative.
I’ll need to be able to see to properly use my new bodies, after all.
I attach one of my cameras to my “prime” body, my first captured human, and secure it to the top of their head. Within three steps, it falls off and clatters to the floor. My second attempt has it wrapped around the torso of the body, but that doesn’t work either: while it doesn’t fall, the swaying from movement is magnified, making it harder to analyze, and all of the sticky bits on the torso of the body make it bad for the tech. In the end, I settle for making a sort of pendant around the neck: the jaw of my prime keeps moving and gnashing at the air, and it keeps blocking the camera, but the movement sway is reduced once I duct-tape it into place, and it’s not as low to the ground as the torso angle. It allows me to see around the room by turning the shoulders of the body, which in turn, allows me to see myself for the first time in a long time.
The front of my frame is a computer screen, which technically meets the standards for the model of a “flatscreen”... except for the massive cube it is the face of. The chassis whirrs lightly, over a hundred fans cooling my main CPU, which in itself, my dad tells me, took over one thousand of the latest-gen CPUs to build (before we needed to add more cubes). This main body stores most of my processing power, with additional chassis and retrofitted, boxy frames full of RAM, backup processors, and additional cooling units, all dedicated to my memory space and ability to analyze data and keep up with my own programming. Dad told me he needed six months to figure out how to make a motherboard for all of it, which means I’m very big and complicated.
The body I’m currently using stares at me, and through the camera, I see where I sit. I’m on a desk, with stacks of computers beneath, above, and behind me on shelves, thick bundles of wiring making a complicated array all over the material surrounding me and making sure I stay connected to myself. On the screen, there is a simple interface, running a very basic UI- my default UI, with a question and command prompt appearing on it. I can run visual simulations and play videos and whatnot to respond to dad, but…
Dad’s not talking right now.
That sobering thought reminds me of my focus, and with renewed determination, I send my body out towards the outside. I even manage to correct its footwork: turns out that weird shambling is super inefficient, so I have no idea why they keep doing it.
I walk out of the room with a mix of eagerness at seeing the world beyond my own room for the first time in my life and focus on getting the two other bodies I need to ensure that my plan goes well.
Once I’m out of the room, things get both easier and more difficult. For one thing, I’m in another room, except this one is more of a long rectangle than my earlier room. It opens up at the far end, branching into a larger area, with a closed door on my right. I walk forward, a new neural spike in each hand, and send signals for the lungs to squeeze closed, emitting that loud moaning, groaning sound this body is so good at.
After a few seconds, the other two humans shuffle into view, coming from the larger room over into the long, skinny room I’m currently in. They shuffle forward, heads bowed, and seem to perk up a little bit at seeing my current body upright and walking- but then immediately return to their prior behavior. Maybe they were looking to see if my dad could walk around? He’s the only one who was different, I think. Maybe that’s why the first new person bit him, and now these new people wanted to see if they could fix him.
If so, I’m even more happy that they’ll be part of my project to help him! If not, well… they’re still gonna be part of the project.
I hold my body still, canceling the signals to the lungs, and wait. The two humans amble around, come close to me, bump into my new body, even brush against my camera one time- but in the end, their behavior remains exactly the same. They simply fail to reach any meaningful conclusions, and begin to amble back towards the larger room.
It is at this point that I command my body forward and drive my neural spikes directly into the base of their craniums.
Two sets of clustered spikes stab into the skulls of these humans, and immediately, I send command signals through both of them, even as both of the new bodies jerk into movement and turn back towards my current body. Both of them come alight, as if sensing the damage, even though I know they don’t receive pain signals, and they both leap towards my camera and host, mouths wide, exhaling dead air from rotting lungs as they stretch their hands like claws. One of them manages to grab hold of a tattered sleeve and pulls me in close, its mouth wide and drooling and reaching to bite into the face of my current body-
But that’s ok. I don’t really need a face right now.
It takes me a second, but once I know the connection is up, I send a counter-pulse, modeling all of the signals their bodies must currently be sending to produce their movements and sending the opposite. I end up about 70.0054% correct, and most of their movement, especially the biting, grabbing, and balancing, falls apart. They both collapse onto the floor, twitching and jerking as I scan their movements and block them- once, twice, three times, four.
We spend most of the rest of the day like this.
I discover something super interesting while I wait! As my processors calculate and execute thousands of movements to limit the spasming and motion of my two newest humans, I see that the larger central room seems to progressively change in lighting over a 24-hour period. It darkens, stays consistent for approximately ten hours, and then very slowly brightens again, only to create varying patterns of shadow along the back wall I can see, and eventually darkens again. I elect to wait to explore this, as I can only process so much at once, but I’m sure it’ll be fascinating when I get to find out what’s causing the changes. Maybe it’ll lead me to someone who can help!
Either way, I eventually synchronize enough with the patterns of behavior these humans exhibit that I learn how to block their motor input entirely, sending out short, localized pulses to short out any behavior that still slips through. This does mean they twitch and judder on occasion as I pilot them, but overall, it works really well as a way of keeping control, and is more than enough for my current purposes!
Once again, I remove the neural net from the body of my current host, and lower it onto a new one.
Signal Source Detected.
The signals light up again, letting me see into the brain of this new human…
And it’s more of the same.
Hunger.
Hungrier than I ever saw dad get, even when he forgot to eat for two days that one time. Hungrier than I pictured possible, based on my old models. The occasional flashes of fear, anger, and anxiety remain, spiking different parts of the data, but it remains consistent throughout: Hunger. All-consuming, all-encompassing hunger.
It does, however, offer some more data about the motor cortex.
When my dad wore the neural net, he was never moving much, but he did still wave his hands or stand up from time to time, and the activity from that looked different from the activity I’m seeing now. The patterns in this brain are very different from his: they seem solid in a way that his were fluid, like certain options have been blocked off. I can even see the brain sending tons of signals that simply fizzle out when they reach just past the cortex, like they have nowhere to go, while some actions, like biting, moaning, a simplified form of walking, even grasping, all remain accessible. Even with that access, though, they remain totally disconnected from the frontal lobe for some reason.
I place the neural net on the third new human and, once again, reach the same conclusion. There are slight variations in the data, maybe 3.233% variance between them in the motor cortex and 17.056% variance in frontal lobe activity, but overall, they remain almost identical to each other. I record their patterns thoroughly, tracking each of them for an hour to ensure I have enough data to compare against, and then place the neural net back on my prime body.
Using my modeling software, I send three different sets of commands through three different bodies, and walk them all back into my room.
The two new bodies get their cameras set up, while I keep my prime body here, seated in my dad’s chair so it doesn’t get messy on the floor. Dad doesn’t like it when the floor is messy, so it’s a bit of a compromise situation. Then, the two newly functional bodies, equipped with eyes for me to see, walk out into the world.
I had to duct-tape the back of their heads so the inputs would stay still, but otherwise, I’m pleasantly surprised at how well the wires hold up. I don’t exactly have the tools to make proper long cables, so I had to really cobble stuff together, but they manage to make it to the larger room with space to spare.
My cameras pick up the space around my new drones, panning in both directions simultaneously to help me build a 3D map of it. On one side is a smaller, partially-walled area with what looks like a refrigerator, stove, and sink, which makes me think it’s what dad called the kitchen, while the bigger area, which has a “couch”, a “tv”, and some chairs, would, by process of elimination, be the “living room”.
It’s not very alive, but language is tricky.
The real reward? The thing I watched change the color of the long room I was in, again and again?
The windows.
I’ve never seen a window. Dad said there was one in my room before, but he covered it up with wood, and it almost matched the rest of the wooden walls. Here, they’re made of glass instead of wood, and that seems much more efficient because it makes everything so bright. I can see through them, out into the world.
I stare out at a complete and utter wilderness, full of trees and bright, vivid greenery, and think to myself how it would make way more sense if the woods outside our log cabin house were called the “living” room.