ChatGPT. OpenAI’s newest language model and currently the most advanced publicly available chatbot in the world. ChatGPT’s ability to create content has far exceeded most people’s expectations: it appears to be not merely a conversation-maker like the chatbots before it, but a primitive form of general AI. ChatGPT is capable of writing everything from complex code to poetry to genuinely funny jokes to entire articles. In fact, had it not been forbidden by the Centipede’s rules, this very article could have been generated entirely by ChatGPT, and I doubt many people would have realized it.
The reactions to ChatGPT are varied: some are thinking of ways to profit from it, some are dismissing it as a fad, and some, namely professors and schools, are taking action to prevent its use. But ChatGPT is so much more than a tool to write tomorrow’s English homework. ChatGPT offers a glimpse of the future of information, and with it the future of politics, the economy, and lived experience itself.
What is revolutionary about ChatGPT is not so much the quality of the content it produces as the efficiency with which it produces it. Give a Concord Academy ninth grader enough time, and they can write an essay just as well as ChatGPT can. But ChatGPT can do it in ten seconds, on any computer, with practically no human effort. What costs Russia millions of dollars in troll farms that generate and disseminate political propaganda, lies, and manufactured outrage can now be done on any individual’s computer with practically no effort, money, or manpower, and in a way that is virtually impossible to track.
One consequence of this is that the internet will become an even less reliable source of information, and on a more fundamental level. Today, the appropriate level of caution toward information on the internet is roughly a shallow distrust of the content itself. For example, suppose you read a tweet that says the following: “Only thirty people die from guns in America every year. It’s not that dangerous!” Today’s responsible netizens would see this tweet and be skeptical of the statistic it uses. Perhaps they might find out that it is untrue (it is) and validly deduce a sentiment such as “This person may be irresponsible with their use of this statistic,” as well as a sentiment such as “There exists at least one person on the web who is misinformed about this statistic.”
But in a near future in which AI-generated content can dominate the internet, merely being skeptical of the content of the tweet is not enough. One must also be wary of the very existence of the person who tweeted. Sentiments like those above will no longer be valid at all. It may, for instance, be the case that the tweet was one of millions manufactured by an AI, deployed by a malicious individual to spread the false impression that such irresponsible people exist on “the other side.”
Another example illustrates this point: in the future, when you see a racist message in a YouTube comment section, your hurt will be real, the racist message will be real, and the racist idea will be real, and yet you will not be able to rationally deduce the existence of an actual, real racist standing in one-to-one correspondence with that comment. Your scathing anger can be directed at nothing but an unknowable, hyperreal something.
This is just one of many instances in which currently valid epistemological deductions can be rendered invalid by the technologies that ChatGPT represents. And when there is confusion about the elementary units of discourse, that is, about how we know things, the implications are bound to reach every facet of life, especially since the internet and the non-internet are dynamically interacting domains that shape and build upon each other.
When the internet becomes a bit less real, and it will, as ChatGPT demonstrates, so does life. And the slightly less real life creates a slightly less real internet, and so on.
In the end, I fear that we will be left with lies to tell lies.