Alankrita Malhotra wrote this piece on the ChatGPT interface for the Art of Writing Spring 2025 course “Writing Robots” taught by Margaret Kolb. This essay is a finalist in the Spring 2025 Art of Writing Student Essay Contest.
Grey and white. A canvas of neutral tones, objective in their very existence.
Blank space. Open to interpretation.
A blinking cursor. Urging questions, about anything.
ChatGPT’s interface is anything but neutral. It actively guides users to think and act in certain ways. Through its minimalistic framing, conversational structure, and strategically placed disclaimers, the interface reinforces the illusion of AI as an ever-present, thinking entity. It reframes the idea of a self, of memory, and of belief.
A blank expanse of neutral colors welcomes you as ChatGPT loads – evoking a notebook, a word processor, or a digital void open for projection. Unlike traditional interfaces cluttered with menus or advertisements, ChatGPT’s environment eliminates distractions. It positions itself as a pure, self-contained source of thought generation. Just you and ChatGPT, generating together.
Your cursor blinks in the “ask anything” box, waiting for you to begin. Above, the phrase “What can I help you with?” hovers in soft yet bold font. The language is carefully calibrated. “Anything” implies all-encompassing, limitless knowledge and open-ended assistance; the word choice cements ChatGPT as an all-knowing authority. “Help,” on the other hand, casts the chatbot in a role of guidance rather than singular authority, granting the user a false sense of agency and control. Together, these choices suggest a dependency: ChatGPT is ever-present, ever-helpful, ready to respond.
Thus, the user becomes an interlocutor in a seemingly collaborative process. Unlike a search engine, which presents fragmented lists of information, ChatGPT offers a singular, continuous stream of thought – framing itself not as a database but as an entity that “thinks,” or seems to.
Familiarity further reinforces trust. ChatGPT’s interface resembles messaging applications like WhatsApp, Messages, or Slack, making confiding in ChatGPT as easy as confiding in a friend. Just as a “typing…” indicator pops up when a friend begins to reply to your message, ChatGPT too “thinks” before delivering its answer. This delay is a psychological cue, inserting a pause that mimics human cognition.
Perhaps this is why ChatGPT has been anthropomorphized all over the internet, becoming a confidant, a mentor, a romantic partner even.
The conversational illusion doesn’t end with a single exchange. Instead, ChatGPT preserves our past interactions, weaving them into a continuous thread – a store of digital memory, always accessible with a single click via the notebook icon in the top left corner. As responses accumulate, coherently phrased and confidently delivered, we begin to develop a relationship with this external repository of our conversations.
To understand the implications of this external memory system, we can examine Clark and Chalmers’ extended mind thesis.[1] In their thought experiment, Otto, an elderly man with Alzheimer’s disease, relies on a notebook for everyday knowledge. Because he consults it reflexively, it functions not as an external reference but as part of his cognitive system. The notebook becomes an extension of Otto’s memory process – static and reliable, authored and controlled by Otto himself.
ChatGPT, however, represents something fundamentally different from Clark and Chalmers’ static repository of memory. Its memory is co-created through interaction. If Otto’s notebook were replaced with a smartphone equipped with ChatGPT, he wouldn’t simply be consulting his own written notes. Instead, he would engage with a system that actively interprets his questions and generates responses based on patterns in data beyond his personal experience. Clark and Chalmers might view ChatGPT as an extension of the mind, but with a crucial difference: it thinks back to you.
This distinction leaves us with three cognitive systems. The first is biological memory, which consolidates experiences through neural connections. The second is Otto’s notebook, an external aid that passively stores personal knowledge under his control. The third is ChatGPT: an active system that synthesizes and reshapes information beyond our direct input. In this hybrid cognitive space, our memories become intertwined with AI-generated content in ways that make their origins difficult to distinguish. This memory entanglement blurs the line between our thoughts and ChatGPT’s answers.
The threat of memory entanglement becomes even more concerning as the GPT widget finds its way onto countless smartphone screens, ready to “help” with everything from restaurant recommendations and word definitions to emotional reassurance, a single tap away. Its convenience, multi-device integration, and constant availability create a relationship of dependency. The more widespread and integrated these systems become in our daily lives, the more our cognitive processes shift – from active recall to recognition, from creation to verification.
The interface facilitates this shift through subtle design choices. Beneath the seamless, slightly eerie surface lie important limitations. A quiet disclaimer lingers at the bottom of the screen: “ChatGPT can make mistakes. Check important information.” This phrase, though critical, is grey, small, and positioned beneath the chat window, functioning more as a formality than an active caution. In a world saturated with data, cross-checking AI-generated responses becomes nearly impossible. Yet the interface subtly encourages trust over skepticism.
ChatGPT’s interface, and the way it shapes interactions, proves that McLuhan was right: the medium is the message.[2] ChatGPT is not just a tool; it is a presence – an interface of thought, shaping not only what we ask, but how we believe, process, and internalize the knowledge it provides. In a world increasingly shaped by AI-generated content, perhaps the greatest illusion is not that ChatGPT thinks, but that it allows us to believe we do.
As interfaces become more seamless and ‘neutral,’ the biases woven into their structures become harder to detect. The aestheticization of AI bias through clean design creates a double invisibility: the distortion remains, but our ability to recognize it diminishes.
Bibliography
[1] Clark, Andy, and David J. Chalmers. “The Extended Mind.” Analysis 58, no. 1 (1998): 7–19. https://doi.org/10.1093/analys/58.1.7.
[2] McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: McGraw-Hill, 1964.
