Generated by Dreamstudio. Prompt: Am I a robot? 4k hd, synthwave, vaporware, outrun, cyberpunk

Cognityping: I am not a robot

Noam Tenne

--

“I am not a robot” has become a standard internet pastime, introduced as a stronger captcha mechanism to battle increasingly sophisticated crawlers, spammers, and botnets.
These mechanisms challenge you to identify skewed alphanumeric characters, objects such as traffic lights in photographs, and now even the correctness of an AI-generated image.
Though captchas started out as simple verification, our answers now elegantly close the loop: they are harnessed to correct and train machine learning models.
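
On the verification side, the mechanics are mundane: ticking the box yields a token in your browser, and the site’s server asks the captcha provider whether that token checks out. Here is a minimal sketch, assuming reCAPTCHA v2 and Python’s requests library; the secret key is a hypothetical placeholder:

```python
# Minimal sketch: server-side verification of a reCAPTCHA v2 token.
import requests

RECAPTCHA_SECRET = "your-secret-key"  # hypothetical placeholder
VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def is_probably_human(token: str, remote_ip: str = "") -> bool:
    """Ask the siteverify endpoint whether the captcha token is valid."""
    payload = {"secret": RECAPTCHA_SECRET, "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    result = requests.post(VERIFY_URL, data=payload, timeout=10).json()
    # "success" merely means the challenge was passed; it is still a
    # probabilistic judgement about the agent, not proof of personhood.
    return result.get("success", False)
```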

I believe that with the recent advancements in AI, the weight of the phrase “I am not a robot” is shifting. What started out as an attempt to assess the legitimacy of an agent is now an attempt to assess the agent’s type of cognition.
Doubts about legitimacy have become part of our day-to-day lives. For example, recent political elections were affected by malicious agents operating botnets that spread misinformation.
Doubting the legitimacy of content on the internet is by no means a new concept; it’s a healthy practice that can save you much embarrassment, as even established news sources sometimes fall short on fact-checking.

But this new type of doubt is different. The question is no longer just:

Is this real?

The question has now become:

Are you real?

Culturally, this is not confined to spam, as we also see the effects in our language. For example, NPC:

A non-player character (NPC), or non-playable character, is any character in a game that is not controlled by a player.

Is now also a term that’s used as an insult:

… suggest that some people are unable to form thoughts or opinions of their own.

We are now entering an era in which judging whether a creation was produced by AI or not will become harder and harder.

Was this text created by AI, and if so, is that necessarily a bad thing?

As a personal anecdote, I make an effort to thoroughly edit myself when communicating over text because I understand that much of the nuance of verbal communication is lost. When communicating verbally, we can allow ourselves to tap into our stream of thought and express ourselves more spontaneously, but taking the same liberties over text can turn our writing into a painfully incomprehensible lump of gibberish.
And so last week I took great care to handcraft an artisanal Reddit post. Flawlessly explicit, perfectly verbose, and in synergy with the rules, because I know what Reddit is like and I didn’t want to get ripped to shreds.

Funnily enough, the first reply to my post was:

What on earth are you actually asking? Sounds like crappy AI generated text.
On the off chance you’re real, …

And I honestly don’t know how to feel about it. Do my efforts to communicate clearly come off as robotic? Should I throw a “lol” or two in there?

Beyond that, should I be insulted when someone wrongfully assumes my type of cognition?

This got me thinking about parallels such as misgendering:

to identify the gender of (a person, such as a non-binary or transgender person) incorrectly (as by using an incorrect label or pronoun)

An act that's harmful to groups of marginalised people who are denied the identity they ascribe to themselves.
Individuals who are misgendered suffer, and they pay a heavy price for it.

Will there ever come a time when other types of marginalised individuals must suffer or pay a price for their type of cognition being wrongfully assumed? Could this possibly have an adverse effect on groups such as neurodiverse people?

Some boundaries are already being drawn. Online communities such as Stack Overflow are banning GPT-generated content in an attempt to reduce low-quality contributions.
One could argue that some individuals are already paying a price for these wrongful assumptions. A very recent example of this is a professor who failed students for supposedly using AI to write assignments.

Like many problems, I think this one boils down to context.

In academia, courses are designed with specific outcomes that students are expected to learn. The work submitted by a student is a testament to that learning, assuming they wrote it on their own. Now throw AI into this mix: did the students formulate the answers on their own, or did they cheat by using AI to supply the correct answers? What if they formulated correct answers and used AI to add fillers, elaborations, or examples? This becomes quite messy, so you can understand why simply banning AI is the most pragmatic approach.

On the flip side, we have the software industry. Software engineers are measured by their speed of delivery in conjunction with the correctness and quality of their code, and it has become perfectly acceptable to use AI-generated code. This is fair, as long as AI-generated code is produced at least as quickly and correctly as code produced by humans. That being said, AI-generated code challenges some of our notions of quality, which are an inseparable part of modern software delivery. If code is generated by AI, does it still need to be deemed maintainable? Do you still need version control?

I expect that in the same way that constantly evolving cyber-attacks fuel the lucrative industry of cyber-security, we may see constantly evolving techniques of synthesis fuel an industry of detection.
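
Today’s detectors already hint at what that industry might look like. One common heuristic scores text by how predictable a language model finds it (its perplexity), on the theory that machines generate the text machines expect. A rough sketch, assuming the Hugging Face transformers library; the choice of gpt2 and the cutoff value are purely illustrative:

```python
# Rough sketch of a perplexity-based detector: text that a language model
# finds very predictable (low perplexity) is weak evidence of machine
# generation. The model and threshold are illustrative assumptions only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing the inputs as labels makes the model return its own
        # next-token cross-entropy loss over the sequence.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def looks_generated(text: str, threshold: float = 30.0) -> bool:
    # A crude, illustrative cutoff; real detectors are far more elaborate.
    return perplexity(text) < threshold
```

The brittleness of such heuristics is the worrying part: polished, carefully edited human prose can also score as highly predictable, which is exactly how a fastidious Reddit post gets flagged as machine-made.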

If detection becomes harder, will proof become harder as well?

And if so, will this have a negative effect on people who are physically, mentally, or neurologically challenged?

Being a fan of sci-fi, my mind can’t help but wander to Blade Runner-esque dystopian futures, where the wrongful detection and assumption of an individual’s type of cognition may lead to sanctions. Perhaps you have failed to prove that you are not a robot, and so your bank account has been frozen.

I think it would be helpful to have a term that describes the act of assuming an entity’s cognitive abilities or the cognitive identity of a creator.
And on top of that, a term for the feeling of having one’s cognitive identity wrongfully assumed.

I propose “cognitype” for an entity’s type of cognition, “cognityping” for the act of assuming it, and “miscognityped” for having it wrongfully assumed.

Thanks to Yoav Luft for reviewing and commenting.
