The novelty and acceptance of Conversational AI

Exploring psychological factors of Conversational Design, like trust and social influence, that can impact user adoption beyond initial novelty.

A lounging garden gnome with a pipe, intended to highlight the idea of novelty.
Colibri1968 via Wikimedia Commons, Public Domain


Love and Hate

Working in the product design space, I LOVE understanding why humans act, react, and generally perceive things the way we do. Throughout my career, I've repeatedly gone looking for a deeper understanding of human psychology, and it's always informed how I tackle the next project or feature. I would even go as far as to say that Product Designers should be psychology enthusiasts.

An image of several rabbits, with one peeking out of a rabbit hole.
Photo by Sincerely Media on Unsplash

As I’ve done before, faced with a new intersection of human behavior I don’t fully understand, I found myself down another research rabbit hole. Like many others right now, I find myself racing to define a Conversational Design practice.

A screenshot of Ozmo’s Self Support Assistant — showing a conversational interface.
Image credit: screenshot of Ozmo’s Self Support Assistant

Working on conversational interfaces in the tech support space, I've noticed some things I'd consider peculiar. On one hand, I see consistent research, data, and opinions suggesting that almost everyone despises chatbots. On the other hand, we're experiencing a wave of conversational experiences that keeps growing in presence and variation.

What perplexes me is that nearly every AI product is interacted with via conversation of some kind. Yet while some versions of conversational experiences have been disparaged, why are we so willing to accept and try out others? What's happening psychologically right now that has humans actively adopting and accepting certain conversational experiences while hating others at the same time?

So I began my journey down the research rabbit hole.

Marketing image of the Pet Rock product.
Creator: Al Freni | Credit: The LIFE Images Collection/Getty

Willingness: The initial mindset

The novelty of something new is exciting for most folks, I get that. And to be fair, Chatbots in their more traditional form have been with us for some time. My assumption here was that traditional Chatbots have built up a notoriously bad reputation over time. But, as with any assumption, I knew I needed to dig deeper. I started by attempting to investigate our willingness.

My research began to reveal that humans are generally willing to try AI experiences, and even more broadly — new technology — for a combination of reasons, including perceived usefulness, efficiency, enjoyment, and a natural human inclination to experiment with shiny new things.

To summarize, here are some of the key reasons and ideas I encountered for why humans are willing to try newer technologies, especially AI:

  • Novelty and curiosity: Humans are often drawn to new technologies and the potential they hold for progress and improvement.
  • Social influence: Observing others adopting and benefiting from AI can encourage further exploration and acceptance.
  • Efficiency and automation: AI can automate mundane and repetitive tasks, freeing up human time and energy for more complex or enjoyable activities.
  • Enhanced capabilities: AI can perform tasks that humans find difficult or time-consuming, such as processing large datasets, identifying complex patterns, and generating new content.
  • Improved experiences: AI can personalize interactions, provide targeted recommendations, and offer convenient, 24/7 support in areas like customer service and digital assistance.

The concept of social influence makes total sense to me: humans have historically relied on word of mouth to inform their decision-making. AI is no different; we're currently bombarded with personal stories from colleagues, friends, and family about how they're using AI.

Thinking about the other key concepts, efficiency and automation are the big promises of AI, and also where some of the fears stem from: imagine seeing your skill set automated, leaving you effectively redundant.

Screenshot of the “summarize” feature now available within Google Docs

Conceptually, both Enhanced Capabilities and Improved Experiences are showing up within existing software experiences more and more every day. Things like Google’s AI-enhanced search results or any of the “synthesis” capabilities being folded into nearly every product — “help me write” or “summarize this” types of patterns are quickly becoming commonplace.

What will be interesting across all of these is what sticks. I think many of us would agree we’re currently in a technological bubble. So out of these enhancements, improvements and novelties — what will become consistently useful?

I believe the conversational experiences that survive will fall into one of the following categories. Some will be more like Latent Needs, as Jared Spool describes them, akin to a heated steering wheel. In other words,

“I didn’t know I needed this — but now that I have it…”

Other solutions will aim to solve existing problems facing human users. The rest will become casualties of the bubble as the novelty wears off and usefulness fails to persist.

At a high level, it seems a combination of things leads us to a place of willingness: the novelty of something new, the influence that both our peers and the media have on us, and of course, the promise of better.

Acceptance

I realize following a meandering path like this can be difficult, so I'll try to keep us tethered. Remember where this rabbit hole began: I asked why we might be willing to use an AI-driven conversational interface when many of us dislike the classic Chatbot. As I dug into the concept of willingness, I often encountered acceptance as a nearby, if not overlapping, concept.

The primary distinction is that willingness is a short-term, initial mindset, while acceptance describes the sustained, actual use of a product over time, reflecting deeper integration and satisfaction. So now that I understand the promise of the new a bit better, what about this sustained relationship?

Acceptance is primarily defined as the behavioral intention or willingness to use, buy, or try a good or service.

It is a critical factor for the successful adoption and uptake of new technologies, particularly Artificial Intelligence. Low acceptance can lead to the disuse of resources, a surplus of unused devices, and a slowdown in technological innovation.

The research also showed how acceptance can be a conscious, personal choice, such as knowingly purchasing a device that contains AI, or an involuntary action, like unknowingly interacting with an AI chatbot that presents itself as a human customer service agent.

A Swiss Army-style pocket knife, meant to highlight aspects of acceptance such as perceived usefulness.
image credit: Photo by Alejandro Piñero Amerio on Unsplash

The key psychosocial factors that consistently predict a user's acceptance of AI across various industries closely resemble those behind willingness, but with more weight on how perceptions hold up over time:

  • Perceived Usefulness / Performance Expectancy: This is the degree to which a person believes that using a particular technology will help them achieve their goals or be useful in their daily life. It is often the strongest positive predictor of the intention to use a new technology.
  • Perceived Ease of Use / Effort Expectancy: This refers to a user’s perception of how effortless a technology would be to use. Its influence can be weaker than perceived usefulness, especially as users become more familiar with technology in general.
  • Attitudes: A person’s attitude towards a technology is a frequently included variable that positively predicts their behavioral intention to use it.
  • Trust: This is the subjective attitude that allows a person to make a vulnerable decision, believing that a technology will achieve a desired goal. You can really start to see the interplay here: trust in both the AI and its provider is a significant driving factor in acceptance.
  • Social Influence / Subjective Norms: This factor involves a person’s perception that significant others would approve or disapprove of their use of the technology. It is particularly relevant in industries with high levels of social contact.

Models for Assessing Acceptance

To understand and measure acceptance, I came across several theoretical models from the research, along with adaptations built specifically for contexts like AI.

  • Technology Acceptance Model: This is the most frequently used and flexible model for assessing technology acceptance. It posits that Perceived Usefulness and Perceived Ease of Use are the primary drivers of a user's intention to use a technology (see the sketch after this list).
  • Unified Theory of Acceptance and Use of Technology: This model integrates concepts from eight other theories, including TAM — Technology Acceptance Model. It suggests that performance expectancy, effort expectancy, social influence, and facilitating conditions predict behavioral intentions and usage.
  • AI Device Use Acceptance model: A more recent model developed specifically for AI, the AIDUA model proposes that users appraise AI devices in stages based on factors like social influence, hedonic motivation (perceived pleasure), and anthropomorphism (human-like qualities).
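To make the first of these concrete, here's a minimal sketch of how TAM is commonly operationalized in survey research: Likert-scale items are averaged into construct scores, and behavioral intention is modeled as a weighted combination of them. The survey items, weights, and function names below are hypothetical illustrations, not values from any published study.

```python
# A minimal, illustrative sketch of TAM-style scoring. The items and
# regression weights are hypothetical, chosen only to show the shape
# of the model.

from statistics import mean

def construct_score(responses):
    """Average a respondent's 1-7 Likert items into one construct score."""
    return mean(responses)

def behavioral_intention(pu_items, peou_items, w_pu=0.6, w_peou=0.3, intercept=0.5):
    """Predict intention to use as a weighted sum of the two TAM constructs.

    In real studies the weights come from fitting survey data; the
    usefulness weight (w_pu) is consistently found to dominate.
    """
    pu = construct_score(pu_items)      # Perceived Usefulness
    peou = construct_score(peou_items)  # Perceived Ease of Use
    return intercept + w_pu * pu + w_peou * peou

# One respondent's answers to three hypothetical items per construct,
# e.g. "This assistant would help me resolve issues faster" (PU) and
# "Interacting with this assistant is effortless" (PEOU).
pu_responses = [6, 7, 6]
peou_responses = [4, 5, 4]

print(f"Predicted intention to use: {behavioral_intention(pu_responses, peou_responses):.2f}")
```

The structure is the takeaway here: ease of use matters, but perceived usefulness is the factor a product most has to earn.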

What I’ve found so far is that these psychological factors all seemingly overlap and interrelate. Willingness correlates to the novelty and promise of something new, while Acceptance speaks to how our expectations are met over time. I can start to see how these pieces fit together in perhaps a larger adoption journey.

The perceived promise gets folks through the doors and consistent outcomes matching expectations keeps them coming back.

Whew, here we are in the rabbit hole. I still feel like there’s more to understand about sustained use and what individual facets can help drive acceptance. What is it that makes us hesitant instead of willing as the conversational space in software continues to evolve? What brings us back over and over again?

Modern depiction of Microsoft’s classic virtual assistant — Clippy
image credit: Microsoft https://news.microsoft.com/pt-br/aproveite-a-nostalgia-com-os-novos-planos-de-fundo-do-microsoft-teams/

My curiosity led me deeper. My hypothesis at this point was taking shape around this intersection of humans and how or why they might trust software. I've personally become distrustful of classic Chatbots. If it even looks like the Chatbot of yesterday, say, a Floating Action Button yielding a Clippy-like offering that promises to answer all my questions, I'm immediately prepared to never touch it.

Trust… actually Search first

Before heading into why and how trust is earned and eroded, I started by investigating perhaps the precursor of today's conversational experience: Search.

A retro screenshot of Google’s homepage from circa 1998
Google Beta homepage (1998) Source: India.com

Traditional search experiences are, of course, now being woven together with AI. Even so, you can still see and sense the same disparity in appreciation between search experiences that exists between classic Chatbots and modern conversational experiences. Some search experiences are highly utilized and accepted, and others are not. Putting allegiances aside, I was curious how living with Search as a paradigm, and evolving with it all these years, could have influenced us as we're redirected into more conversational interfaces.

Through observations of some early conversational interfaces I've been working on, I noted that users still mostly begin their conversations with just a few words, if not a single word. Why?

Research revealed that users may start their searches with one word primarily to reduce initial cognitive load, and because of the iterative nature of search. I'd include familiarity as well, since search has been taking shape beside us for years now.

Explanations include:

  • Reducing cognitive load: Starting with a single term requires little effort. Users can quickly generate results and then adapt their query based on what they see.
  • Iterative process: Search is often exploratory. A single keyword acts as a starting point, giving users something to refine as they go.
  • System familiarity: Decades of exposure to search engines have trained users to trust that even minimal input will trigger auto-suggestions or relevant results.

Interestingly, while iterative searching does happen, only 15% of Google searches are refined or modified. This suggests to me that modern search engines are highly performant in that users often find satisfactory answers from their first attempt. Over time, this reliability has created a strong expectation that “search just works,” reinforcing user trust and shaping behavior.

As a result, certain search experiences and approaches have evolved with us, and adapted to us, to become highly effective.

In contrast, when working with an AI experience today, users often feel as though they're at fault for not achieving the outcome they'd hoped for. “It's my fault I can't get this AI to do what I want!” We should work to eliminate this returning notion of “User Error.”

This led me to some new questions:

  • How does the trust that users have with traditional search transfer (or fail to transfer) to conversational AI?
  • What design patterns might we use to help conversational AI feel as reliable and effortless as traditional search?
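On that second question, one candidate pattern borrowed straight from search is the predictive suggestion: meet a one-word opening with completions the assistant is confident it can handle. Here's a minimal sketch, assuming a hypothetical hard-coded intent list in place of a real ranking service:

```python
# Sketch of search-style predictive suggestions for a conversational input.
# The intent list is hypothetical; a real product would rank suggestions
# from query logs or a retrieval index rather than a hard-coded list.

SUPPORTED_INTENTS = [
    "reset my voicemail password",
    "reset network settings",
    "restart my phone",
    "transfer data to a new phone",
    "turn off 5G",
]

def suggest(partial_input, limit=3):
    """Return supported intents that match the user's partial input."""
    p = partial_input.strip().lower()
    if not p:
        return []
    return [intent for intent in SUPPORTED_INTENTS if p in intent][:limit]

# A one-word opening ("reset") immediately surfaces what the assistant
# can actually do, shrinking the ambiguity about its scope.
print(suggest("reset"))
# -> ['reset my voicemail password', 'reset network settings']
```

Like search autocomplete, the suggestions double as documentation of the system's capabilities, which speaks directly to the ambiguity problem above.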
A human hand shaking a digital-looking hand
Image credit: Adobe Stock

Back to Trust

As seen with the evolution of Search, it took time to reach a point where users rely on the pattern, usually without question. I'm personally very confident right now that I can input very little into a Google Search and, between the predictive suggestions, the presentation of results, and the likely summary answers, I'll get what I'm looking for quickly and easily.

Unlike usability, which is immediately noticeable when something feels difficult or frustrating, trust operates subtly in the background.

Trust is built in small moments of reassurance and consistency, and it can be lost in a single misstep.

My journey across this research continued to affirm some things about trust. For example, in UX it isn’t just about security — it’s about predictability and transparency. Users trust products that behave in expected ways, where actions lead to anticipated outcomes. But trust doesn’t just matter at the beginning of an interaction; it needs to be reinforced throughout the user journey.

While the conceptual space surrounding how humans develop a trusting relationship with software experiences is vast, here are a few notable concepts I can quickly summarize:

  • One of the most powerful factors influencing trust is familiarity bias — the tendency for people to trust what feels familiar.
  • Another key principle is cognitive fluency, which refers to how easily information is processed.
  • First impressions also shape long-term perceptions of trustworthiness. Research in psychology shows that people form lasting opinions within milliseconds of encountering a website or app.

This is where our willingness is gambled away as our anticipated outcomes aren’t realized and we’re left disappointed.

  • Another psychological factor is ambiguity aversion — the natural discomfort people feel when outcomes are unclear. If users aren’t sure what happens when they click a button, whether their payment went through, or how their data will be used, they experience hesitation. This seems like a facet of why many of us have grown negative towards traditional Chatbots — I know personally I’m never sure how versatile or capable they might be.
  • Emotional reassurance also influences trust. Users feel more confident when an interface acknowledges their concerns, provides timely feedback, and offers clear pathways forward.
  • Social proof is a psychological phenomenon where people conform to the actions of others under the assumption that those actions are indicative of the correct behavior. In UX, social proof can be leveraged to enhance trust through elements like reviews, testimonials, or user counts. The antithesis of this is Dark Patterns. Interestingly, this seems to correlate back to Social Influence in how we might adopt and accept software.

The answer to my initial question feels like it's starting to take shape. Trust is clearly a big reason that some of us humans have become wary of traditional Chatbots and poor search experiences. We've been left stuck in ambiguity (“are you able to actually answer my question?”) and then subsequently and consistently disappointed with the quality of the answer or help we sought.

A toy robot set in a dark background
Photo by Jochen van Wylick on Unsplash

Novelty, Risk & Familiarity

The research so far has shown how novelty can draw users to AI and new technological experiences. I found myself wondering whether the relationship between novelty and willingness in this context was actually balanced. To answer that, I quickly dug into how humans perceive risk around something new and novel.

According to my research, instead of a balance between novelty and willingness, the opposite is often true: novelty can increase perceived risk and trigger risk-averse behavior.

Here’s why:

  • Uncertainty and lack of information: Novelty inherently involves uncertainty, as the outcomes and potential risks of something new are not well-known or tested. This lack of information can be perceived as risky, prompting individuals to be cautious or avoid it altogether. This idea tends to be represented by a certain percentage of folks out there — still feeling hesitant or fearful of AI in general.
  • Unfamiliarity: Humans tend to be more wary of the unknown. A lack of familiarity with something new can heighten feelings of vulnerability or potential harm, leading to risk aversion.
  • Potential for negative consequences: Without prior experience, the potential for negative consequences with something new is often emphasized. Individuals tend to weigh potential losses more heavily than potential gains, making them more hesitant to engage with the unknown.
  • Lack of fluency: Things that are difficult to process, or are again unfamiliar, can be perceived as less safe.

All of this starts to make sustained adoption feel like a razor's edge. Initial novelty can undermine trust if expectations are unmet. Familiarity, through easy interfaces, builds trust, but repeated negative experiences (“familiarity betrayed”) destroy it, leading to disuse.

To build familiarity and reinforce trust, conversational interfaces must consider:

  • Novelty as an Entry Point: New AI experiences spark curiosity but risk disappointment if expectations aren’t met.
  • Prioritize Clarity and Empathy: Design for active listening, clear communication, and acknowledgment of user frustration.
An example screenshot where a Chatbot enabled human handoff
Image credit: https://www.gptbots.ai/blog/chat-bot-to-human-handoff
  • Continuously Improve and Manage Expectations: Regularly update AI performance and set realistic expectations for its capabilities. This might be obvious, but if we knew ahead of time that the experience we were about to engage with was limited in certain ways, we might not end up so disappointed (a rough sketch follows below).
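To ground those last two points, here's a minimal sketch of what expectation-setting plus a handoff rule might look like inside a chat loop. The greeting copy, thresholds, and the answer() stub are all hypothetical placeholders, not any real product's logic.

```python
# Sketch of expectation-setting plus a human-handoff rule in a chat loop.
# Thresholds and the answer() stub are hypothetical placeholders.

CONFIDENCE_FLOOR = 0.55   # below this, a reply counts as a likely miss
MAX_LOW_CONFIDENCE = 2    # consecutive misses allowed before escalating

GREETING = ("I can help with device setup and troubleshooting. "
            "For billing or account changes, I'll connect you with a person.")

def answer(message):
    """Stand-in for a real model call; returns (reply, confidence)."""
    return "Try restarting your device.", 0.4  # hypothetical low-confidence output

def chat_turn(message, low_streak):
    """Handle one turn; escalate after repeated low-confidence replies."""
    reply, confidence = answer(message)
    if confidence >= CONFIDENCE_FLOOR:
        return reply, 0  # a confident reply resets the streak
    low_streak += 1
    if low_streak >= MAX_LOW_CONFIDENCE:
        # Familiarity betrayed twice in a row: hand off rather than frustrate.
        return "I'm not confident I'm helping here. Connecting you with a person now.", 0
    return reply + " (If that's off the mark, just say so and I'll find a person.)", low_streak

print(GREETING)
streak = 0
for message in ["my phone won't connect", "that didn't work"]:
    reply, streak = chat_turn(message, streak)
    print(reply)
```

The design choice worth noting: escalation keys off consecutive low-confidence turns, so a single wobble doesn't bounce the user to a queue, but disappointing them twice in a row does.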

Summary

This research journey has revealed much and, of course, left me with even more questions and tangents to continue exploring. (Feel free to check out my collection of resources in this Google NotebookLM.)

However, I think the answer to my original question (“Why are we willing to try and use new AI experiences while many of us hold disdain for the previous era of less intelligent Chatbots?”) is this:

  • Novelty and the promise of new and better has enthralled many of us to dispel our own aversions and give emerging AI experiences a shot.
  • The social influence and word of mouth surrounding us at work, in the media, amongst friends and family has reassured more of us that the risk is worth it.
  • The successful examples we experience and hear about help to contrast the earlier iterations of conversational experiences — these are not Chatbots.

As we all collectively move beyond novelty and into familiarity, our consistently successful experiences that match our expectations will yield trust, leaving us accepting the few remaining products that deliver on all fronts.

Next

One thing stood out across all this research: the idea that effective communication practices can foster trust and acceptance. What a beautiful idea, that everyday communication skills like Active Listening, Empathy, Clarity, and Transparency will all likely embolden trust and drive adoption. That, however, is another rabbit hole.

Have you explored any similar psychological concepts in the AI Conversational space?

I’d love to hear your thoughts! Connect with me on LinkedIn.

