
Content

no QC is not going to tie into alice grove

Comments

Ísabel Pirsic

Engineer Chow, an engineer’s favorite chow, better even than Bachelor Chow. How thoughtful of them 🤖☺️

Bagge

Getting a promotion during the interview is like getting a top grade, right?

wargrunt42

The director speaks exclusively in capital letters... I'm thinking the director is Crushbot or closely related.

Todd Ellner

I have a friend who was tapped for an enormous charitable foundation. They had more money than mission, and not enough people, process, priorities, or, well, anything. She wrote her own job description. The first thing she did was squeeze openings into the org chart for things she had no idea how to do and hired the people to fill them. Then she got folks to commit to priorities. Then she created record-keeping and administrative processes, and where she didn't know something she got input from the department heads she had hired. Then she started doing the job she had been hired for.

David Howe

Turns out it is not even a company - the Director started a workplace LARP and it got a bit out of hand....

Kate Cole

I only just realised that the drone is the camera for the video call.

Anonymous

You know... if she asks for $10m salary and work-from-home, this could be fine.

Anonymous

Having had 6 job interviews in the last 2 months and no offers, I'd kill for an interviewer like Moray.

Mad Marie

Despite Jeph’s comment, I am still of the belief that Alice Grove’s first job was delivering pizzas while wearing a mask…

Andrew

“Hmm, you now outrank me. Should we swap this interview around then?”

Mark

Claire's panel 5 face over the last three strips has just been getting more and more grumpy, hasn't it?

Anonymous

Pizza delivery is a very heroic side-gig. Good enough for Peter Parker and Hiro Protagonist. But I'm thinking we're in more of a Travelers situation. The Director's probably Yay's old colleague from the ShadowNet Co-Op.

Anonymous

I should have more pressing questions, but my real concern: when AIs are quoting another AI, do they just synth their voices or play sound bytes recorded in their memory? This *will* keep me up tonight

Todd Ellner

Everyone thought she was a genius, especially when she told them stuff that is Business Administration 101. She eventually moved on to a better-paying job at Amazon.

Derrik Pates

He says that now, but watch, the last strip will say "Read Alice Grove for the rest of the story." Just watch.

Kyle Rudy

You say that as if there's a functional difference. Voice communication contains abundant sideband information: intonation, stress, and a bunch of other shit lend emotional and literal meaning. To an extent, contemporary voice synth is a historical accident of attempting to transcode plaintext into audio output without those sidebands encoded in the input, and that's why it sounds so janky. But that's not always the case. I suggest googling "inputmag.com thomas buchler tropetrainer" and reading about how intonation and stress are encoded in the Hebrew holy texts with trope marks. Ancient Jewish scholars thought deliberately about their problem and created a specific solution.

An algorithm that perfectly encodes a voice clip's phonemic intent, with all the perceived sidebands preserved, as well as the particular voice's method of delivery (how it sounds, what frequencies it uses, etc.)? That's nothing more than an impressively effective, application-specific compression algorithm. Storing uncompressed audio files in your memory would be the android equivalent of something hanging around in a human's echoic memory: perceived but unprocessed, literally something you have not thought about. It's unquestionable that their raw capacity for perceptual input would be arbitrarily extensible, but as a matter of both efficiency and respect, it likely remains small. Efficiency, because saving cycles by adding memory is a loser's game for inputs in broadband spectrums like voice; respect, because encoding is how you prove to yourself that you were listening.

That last point bears inspection. An AI can guarantee their own understanding of a voice clip by translating the clip into a vocoder definition for the speaker and their encoded intent, then translating it back and comparing the output to the raw input. They can literally read back their instructions in their collaborator's own voice, to make sure that the intent they encoded is likely what was meant. And if there's noise, if there's a discrepancy in how they sound, that's a sign they're not getting all the sideband info. They're missing something, e.g. sarcasm, and they can pick that up with further communication.

This is how I decided to spend twenty minutes on my Sunday afternoon; how are you doing?
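The round-trip check Kyle describes is essentially analysis-by-synthesis from speech coding: encode, resynthesize, and treat the residual as a measure of what the model failed to capture. A minimal toy sketch under stated assumptions: the "clips" here are just sums of sinusoids, the "vocoder" is a crude Fourier projection onto a codebook, and every name and parameter is invented for illustration.

```python
import math

def synthesize(params, n=256):
    """Render a toy 'voice clip': a sum of sinusoids given as
    (cycles-per-clip, amplitude) pairs. Purely illustrative."""
    return [sum(a * math.sin(2 * math.pi * f * t / n) for f, a in params)
            for t in range(n)]

def encode(clip, codebook, n=256):
    """Toy analysis step (the 'vocoder definition'): project the clip onto
    a codebook of known frequencies. Integer cycle counts are mutually
    orthogonal over the window, so this is a crude Fourier analysis."""
    params = []
    for f in codebook:
        amp = (2.0 / n) * sum(clip[t] * math.sin(2 * math.pi * f * t / n)
                              for t in range(n))
        if abs(amp) > 0.05:  # drop negligible components
            params.append((f, amp))
    return params

def roundtrip_error(clip, params, n=256):
    """Resynthesize from the encoding and measure the RMS residual --
    the 'noise' that signals sideband info the model failed to capture."""
    recon = synthesize(params, n)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(clip, recon)) / n)

codebook = [3, 7, 12]

# A clip the codebook fully covers: the encoding round-trips cleanly.
clip = synthesize([(3, 1.0), (7, 0.5)])
print(roundtrip_error(clip, encode(clip, codebook)))    # near zero

# A clip with a component the codebook can't represent (the 'sarcasm'
# the listener has no model for): a large residual flags the miss.
clip2 = synthesize([(3, 1.0), (11, 0.7)])
print(roundtrip_error(clip2, encode(clip2, codebook)))  # clearly nonzero
```

A real system would use a learned speech model rather than a fixed sinusoid codebook, but the shape of the check is the same: a large residual after resynthesis means "I didn't capture everything, ask a follow-up question."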

Peter Jensen

I mean ... for the sake of the cast I hope not. That story didn't exactly have a happy past.

Miyaa

And this is why, despite all the red flags (and it's clear Claire sees them now), I think Claire will accept. Also, may you get the perfect job offer, Lucy.

Some Ed

@Kyle Rudy: I like how you seem to presume the speaker actually includes the "sideband" information they want, and only the "sideband" information they want. As someone on the autism spectrum, I feel like I'm probably lucky if I include any of the sideband info that I'm feeling. I have the feeling that part of what the square speech bubbles indicate is that that stuff generally isn't there to any greater extent than it is in our voice synth. And if some is there, it's probably an accident, and whatever sideband information is encoded is random. That said, I otherwise agree with you.

Some Ed

@joe velsher: My guess is that in this case there was no recording of the Director's voice to play back. This is just text-to-speech of the message sent. #teamfloatingslab

Some Ed

Well, it *wasn't* a company. But the LARP got so entirely out of hand it now is one. At least on paper. Half the employees they had in their initial filing don't actually exist and half of the ones that do have not noticed how far off the deep end the others have gone, including the bit about actually filing incorporation info for their game company. This includes the GM. Edit: Forgot to mention that most of the ones who are aware of how far this has gone are laughing their asses off, blissfully unaware of what legal ramifications might be headed their way.

Some Ed

@Andrew: not going to happen in this case; Moray's in another department. Specifically, the department you really don't want providing the only interviewer if you're looking to not end up running a brand new department. Because, you know, if there were a department to do what it is you do, one would expect they'd be involved in the process, and it doesn't look like they are. I mean, if you believed Moray, they are, as Claire's their head. But since she hasn't accepted yet, she really isn't.

Also, that whole "have they accepted yet" question is a critical piece of your idea about turning the interview around. If the answer is "no", then they clearly don't yet outrank the interviewer. And if the answer is "yes", the interview is over, so it can't be turned around like that.

That said, if one is giving an interview to somebody who would work in a department that your company has not actually even defined yet, let alone hired for, one might be kind of desperate to get the person to accept the offer, and I can see how such a ploy could manipulate some people into accepting a job offer without really thinking it through. It can be important to keep in mind what the actual requirements for accepting a job offer are in your jurisdiction.

I'm reminded of a friend who had an interview with a company she was rather skeptical of. She needed a job, however, so she went in braced to say no and to be a stickler about getting all the details on salary, benefits, and anything that could address her concerns that they seemed shady. She came out of the interview, in substantially less than the expected hour, rather dazed. They'd given her an initial task list to get started working on immediately. If she hadn't had friends standing by to talk to her after the interview, she might not have realized that they didn't talk about salary or benefits. They didn't talk about any of her concerns. They didn't ask if she was going to accept the job, and no documents were signed.

I'd like to think that I'd have recognized that kind of ploy without the assistance of others, but I haven't been in that situation, so I don't really know.

Brooks Moses

I believe there are many things that Jeph has said in those comments that are not entirely the truth.

Anonymous

Or Death? Hoping for Discworld connection if Alice Grove is not happening.

Kyle Rudy

That's a good point, Some Ed, but there's more to communication than what the speaker intends to send. It goes both ways. Yes, the speaker might not say everything they want to say. It's also a common case that the speaker betrays more information than they might intend! A perfect model / compression algorithm against human speech will include implicit pattern recognition for, among other things, lies and regret. It's often said of children that their mothers know them better than they know themselves. This can be verifiably true, this can be measurable, if you're asked to repeat something you just said and your margin of error is higher than the recollection of an AI that has modelled your expression beyond your intent.

David Howe

Probably their first invention and source of most of their funding. Which is fair enough; No self-respecting engineer wants to break for meals or experience the evil daystar until they are DONE, dammit...

Anonymous

This is super interesting. I think the AI would benefit from doing what you suggest, Kyle, and I like your proposal because it really investigates how AI are different from us and would therefore consider information differently. I never quite thought of it this way before, but it makes a lot of sense! Very signal theory, which is a compliment!

Toward Some Ed's point, though, I think AI would also benefit from sometimes (probably not all the time) using a technique that benefits humans: taking what you hear, mentally chunking and translating it into something comprehensible in your own mental schemas (which is necessary for processing anyway), and then saying something like "this is what I think you meant" and repeating back your understanding of what the speaker said in your own words (with your own intonation) to get their feedback. This helps correct for the fact that people (probably especially people on the autism spectrum, but also neurotypical people, and probably also QC-style AI) very often don't say what they mean, either in their primary message or their sideband information. It also reassures the human or AI speaker that you are actively listening, that you care about understanding things correctly, and that you want to check your own assumptions, which is something I know you emphasized in your method of AI listening as well.

Of course, the "let me repeat this back to you" technique isn't perfect. First, when you say something in your own words you're using your own personal schemas, which the other person may not share, so even if you actually translated well, they may not recognize that in what you say. Second, to your last point, people may communicate unintended information in their speech. That's not always a hurdle, though. Sometimes if you point unintended information out to the speaker, they will acknowledge it (or recognize that it is true even if it was unconscious); this is certainly a technique many therapists use, though of course you need to be careful with it, as people might not react well, depending on what you say! Other times, when you point out something unintended, the speaker may deny it, but that can give you information too. Very often it means you misunderstood, but even if you didn't, the way the speaker denies your interpretation may give you additional information; for example, many therapists will analyze deflections to get a better sense of what a person is thinking or feeling, what defense mechanisms they might be employing, etc.

All of this, of course, is a lot fuzzier than the encoding, reconstructing, and comparing you propose, which has the distinct advantage of a very objective measure of correctness. It certainly seems like a very good proxy for understanding, but like any proxy it's not perfect; as you say, the AI may fail to recreate the soundbite because they're lacking information and missing something they need to pick up with further communication, but there might also be more than one reason a person would (advertently or inadvertently) land on a particular phrasing and intonation, so even if you reproduce their speech exactly, there's some chance it might be for the wrong reasons. That lends itself to supplementing this method with other methods in times when it's really important to get something right.

And that's how I spent 20 minutes of my Monday afternoon! Here's to nerds everywhere.

Todd Ellner

They tend to use their own, since AIs copyright their voices and use them as a secondary form of ID.