2

The "question" My new coworker is actually ChatGPT is clearly a piece of fiction and should be closed.

We shouldn't encourage this sort of fantasy writing.

5
  • 1
    Hmm, I see that the questions it asks are "Should I?" questions, which are off-topic...
    – DarkCygnus Mod
    Commented May 3, 2023 at 18:30
  • 2
    Given the nature of the discussed question, the trolling tag seems appropriate
    – gnat
    Commented May 4, 2023 at 6:22
  • 2
    very close: workplace.meta.stackexchange.com/questions/5037/…
    – AakashM
    Commented May 4, 2023 at 8:54
  • 2
    It’s trolling, there is zero upside for having it use the time of people who want to help real people with real problems. Vote to close.
    – mxyzplk
    Commented May 5, 2023 at 2:32
  • It certainly looks fantastical to me, but I have no experience with this sort of thing, so I'm just going to watch from the sidelines, sipping whiskey.
    – Kilisi
    Commented May 6, 2023 at 19:33

2 Answers

4

It should be discussed, as hyperbole and misunderstanding, rather than fantasy.

There's no way the job is being done autonomously by ChatGPT, any more than it is being done autonomously by a chainsaw. That aspect is fantasy or hyperbole. But it's also an understandable mistake: the tech is new, people are going off fictional TV and movie portrayals, and this SE site is well placed to provide that education and draw those distinctions.

It's entirely plausible that someone is relying so heavily on ChatGPT, say to run multiple jobs, that the symptoms are as reported in the question. Indeed, that's a more plausible explanation than the one given in the question, as ChatGPT wouldn't take hours to respond.

There are already news reports of people using ChatGPT as grifty overemployment.

I would argue the question is relevant, and these details should be thrashed out in the comments and answers, rather than putting the whole thing out of bounds by closing the question because someone misunderstands what the tech can do.

4
  • 3
    You're assuming that it's a fiction, and not a real person working with a grifting colleague, who has slightly misdiagnosed the source. Can we really be so confident?
    – Adam Burke
    Commented May 4, 2023 at 5:11
  • I'm not sure it's fiction. I'm not sure it's paranoia. I'm not sure it matters: it isn't in the querent's job description to police this, the situation if true will self-correct, and taking this to management without more convincing evidence has high odds of looking like an attempt to sabotage a co-worker rather than to protect the company. The security question has been raised, but that too is potential rather than actual at this point. For all we know, this was a matter of tweaking the querent in response to an inappropriate question.
    – keshlam
    Commented May 4, 2023 at 13:17
  • 1
    You may well be right, but that all seems on topic for WSE, rather than a reason to close the question?
    – Adam Burke
    Commented May 5, 2023 at 0:29
  • 1
    I'm agnostic about closing the question. As I say, I think there are useful teaching opportunities, though perhaps not without rewriting it into the more direct "If I think someone at work is using an AI text generator without saying so, what are the actual risks to the company and what, if anything, should I do about it?"
    – keshlam
    Commented May 5, 2023 at 2:33
1

It could be a legitimate piece of stupidity, on either the questioner's part or that of their cow-orker.

I'm inclined to answer with "Even if you believe this, it's none of your business as long as they are being productive; leave them alone and do your job." That's a good general answer to most workplace fantasies, including the several romantic fantasies that have gone by in the past year.
