Through conversation with a chatbot, a human becomes the star of an interactive story that they help to write. Rather than limiting our description of chatbot behaviour to dialogue alone, film theory terminology gives us a vocabulary for the emotional content and the periphery found in a text conversation. Suspension of disbelief comes into play when an interlocutor believes they are chatting with another human (Hayward, 2001, p. 78). Alfred Hitchcock’s use of “the MacGuffin” applies to the goals of the programmer (Niemand, 2013); the human is generally not interested in those goals, and the degree of interest depends on whether or not they think they are talking to another human. The montage effect, first identified by the Russian film theorist Sergei Eisenstein, applies to the utterances of a chatbot.
Most people have an opinion about what the word conversation means to them. One view is:
The essential element of a conversation with a chatbot is that the person cares if the chatbot understands what was said. The person is looking for sentences that convey enough meaning so that the person can feel there is a point to the chat. If the human gets the sense the chatbot does not understand or is randomly producing sentences, the person shuts off and resorts to testing sentences, not conversation. (Burke, 2013)
I have to agree with this. The person stops engaging with the imagined human once suspension of disbelief is broken.
Years ago, before I had any experience with Turing tests, I worked with a colleague named Paco Nathan. He ran one of the first online bookstores, around 1995, and we experimented with a chatbot. It began as a C++ program called FRED, was later redeveloped in Java as JFRED, and used a data format we called JRL. I noticed conversation logs where a person would have a great time chatting and eventually say “goodbye.” These were happy accidents: flukes where the right thing said at the right time would cause the person to open up and chat rather than interrogate (Caputo, Garner & Nathan, 1997).
When these “happy accidents” occur, there is a reason why a person perceives they are chatting with a human and does not realise the bot is a machine. Some people believe they are very good at detecting chatbots; if you asked the average person whether they were good at talking with chatbots, most would not know. But the expert chatbot talker might not make the best chat participant from the perspective of the chatbot developer. Chatbot conversation is a young technology that has not been considered a form of literature; it is interactive fiction on some level, but to some it is also a simulation of a conversation. “If a person does not see there is a point to the conversation they will not engage” (Burke, 2013). This is true, except perhaps when the person’s point is merely to engage and see what happens. As a chatbot developer, I am interested in what does and does not work.
Some would say that it mostly does not work, and that is fair; however, better results for 20–30% of the general population are reported in Shah (2012). In a collection of around 2,000 online Turing test simulations conducted at The Turing Hub, only 5% of reports about the JFRED bot described the conversation as “not working.” So 95% of the conversations surveyed were considered to have served some purpose for the human chat partner. Seven per cent of participants reported that they had been speaking with another human being, and a higher percentage, 20%, ranked their conversation as “sort of human” (Garner, 2013).
In cinema, suspension of disbelief happens when a person watching a movie forgets that it is not real. When talking to a chatbot, the bot does not deceive; the person lets themselves forget it is a computer program (if they are among the few people this actually happens to). When people go to the cinema and enjoy it, they may know the whole time that the film was made by a movie studio, but from time to time they may find themselves forgetting about the machinery and focusing on the story, the dialogue, and the characters. Sometimes something similar happens with chatbots.