Potential Risks, Roles, and Responsibilities for UX and HCI in the Age of Artificial Intelligence

INTRODUCTION and PREMISE

Artificial Intelligence has arrived. Many computer scientists, machine learning experts, and long-time leaders in AI have suggested that human creativity, reasoning, cognition, and recall are now obsolete. A new age of machines, a new technological epoch is here. At the center of this phenomenon is the man-machine relationship, but completely reimagined from what we’re used to. 

Based on the observations and predictions of key UX influencers, it’s clear that the entire process, practice, and profession of UX and HCI are at the center of this transformation of civilization. The emergence of AI is all about the man-machine relationship, the human-computer experience. 

Is there a more fundamental science and discipline for AI than UX and HCI?

This raises important questions about the risks, opportunities, responsibilities, and potential new roles of UX in this new machine age. This paper is the first in an ongoing series of reports that explores the implications of these changes, based on research on the relationship between AI and HCI, interviews with UX leaders and practitioners, and capabilities of the developing technology.

I start with an analysis of the current state, in early 2024. But all of this will continue to accelerate; as we’ve seen with AI, the pace of change keeps increasing. 

The research reveals these risks and opportunities for UX at this moment in AI history: 

  1. The entire paradigm of HCI and User Experience will be redefined as the man-machine relationship is redefined by AI. And if this is happening, how and when will businesses, the general public, and the UX profession encounter these changes? 
  2. Existing models for the creation, optimization, and use of user interfaces may be replaced by AI (more below), and therefore, demand for UXD and UXR will evolve or decline for these models: advertising, customer service, digital marketing, search, education, e-commerce, entertainment, e-medicine, and many others.
  3. Once we each have our own personal autonomous intelligent agents that know exactly who we are and what we want, it’s likely that eventually, all marketing, advertising, and any commercial, promotional, or influence-oriented ‘UX’ will become obsolete.  
  4. If we have those intelligent personal agents, the need to optimize ‘conversions’ of all types — a core role of UX up to now — goes away.
  5. At the speed that AI is advancing, existing business knowledge, assumptions, and data (including market, customer, and user research data) become obsolete much faster, and most ‘research’ will have an ever-shorter half-life.
  6. The existence of UX as a practice and profession may decline because many user tasks and experiences that UX has always created, tested, and optimized will be done for us by intelligent agents. 

Additional Observations on Risks and Responsibilities 

The six above are the biggest factors, and the investigations have revealed more. Additional risks and shocks could include (and may have a greater impact than those on the short list above):

  1. UXR / UXD job displacement from AI is already happening, exacerbated by the tech downturn and over-hiring during the COVID pandemic:
    • If AI makes UX professionals 5x to 10x more productive and creative, the same output can be delivered by a fraction of the workforce, eliminating up to 90% of UX jobs. 
    • As in many fields now, UX professionals and researchers need to become prompt engineers. 
  2. Competitive pressures increase for all UX entities, driven by new AI capabilities:
    • Aspirational UX companies, agencies, departments, teams, universities, and independent professionals are learning the near-term power of applying new AI tools and methods to the practice of UX. 
    • Some entities are waking up to the many long-term risks and opportunities described here and are retooling or restrategizing accordingly. 
    • Those UX entities embracing AI and looking at the long-term present significant competitive threats to the entities that don’t take the same initiative. 
    • Individuals and organizations in core UX professions are at risk from better-funded, forward-thinking, motivated, and innovative indirect ‘competitors’ that have access to better AI technology and vastly more computing power outside core UX fields and the research industry.
  3. Massive and unprecedented data collection, analysis, synthesis, recall, and insights are now possible with Large Language Models and rapidly advancing machine learning and analysis algorithms. These new capabilities are significant (for all humanity, in all fields) and represent research capabilities that extend far beyond prior human (UXR) capacity. 
  4. At this writing, ChatGPT can now see and analyze documents and interfaces, design web pages, write CSS/HTML, and automate many more tasks that were once the sole domain of the UX professions. These capabilities advance almost daily.   
  5. End-to-end automation and integration of all phases of ‘traditional’ UXD and UXR is now a major phenomenon:
    • These new AI tools and platforms deliver new insights and productivity, which is both a boon and a risk to professionals.
    • They also highlight a ‘pave-the-cow-paths’ approach to UX and may distract from more serious, long-term risks (and opportunities) for the professional community and distract from true UX innovations possible with AI.
    • Some tools are difficult to learn and integrate with existing practices, and the user experiences can be challenging – despite being designed by and provided for user experience professionals. 

What are potential long-term, systemic shocks to ‘user experience’ in general?  

As AI drives the transformation of ‘users’ and ‘experiences’ as the defining nodes in the man-machine paradigm that has existed for centuries, how does the need for user experience design and research (and related roles) change? 

UX has always played the role of intermediary in that relationship between humans and machines:  

  • Machines and virtual interfaces of all types present operational choices and tasks to users, who then must learn and figure those out to be productive or successful. 
  • The UXD and UXR professions have learned what operational principles work best for humans and try to apply those to close any ‘usability gaps’ that exist. In this capacity, we’ve played the role of intermediary between the user and the machine, to remove barriers, friction, and other impediments to completing tasks and making machines work.  

What happens when those usability gaps go away because:  

  1. Users no longer need to do most of those operations themselves? 
  2. Smart interfaces learn to optimize any user operations that remain? 

Here’s a simple example:

Consider the user experience of visiting Wyoming in August 2017 to see the total solar eclipse. In that adventure, the user needed about ten different interfaces to manage the adventure: find scientific info and learn the science; find, reserve, pay for, and manage a campsite; locate restaurants; find gas and supplies in remote areas; find local events; map out travel routes; track the weather; etc.

All those experiences required different apps, websites, phone calls, or software. All the interfaces used to complete the necessary tasks, at some point, involved UXD and UXR professionals.

But in the future, a personal autonomous intelligent agent will simply ask, “I know you’re interested in the upcoming solar eclipse, would you like to arrange that?” And all those tasks will be done without direct user involvement. No need for ten separate tasks to complete, ten interfaces to navigate, and however many UX professionals to design, test, and optimize it all so users can make sense of it.
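To make the idea concrete, here is a toy sketch of that future agent. All names and the hard-coded task list are hypothetical; a real agent would use an LLM planner and live booking and search APIs rather than the stubs shown here.

```python
# Toy sketch (hypothetical names): a personal autonomous agent decomposes the
# eclipse trip into the tasks that once required roughly ten interfaces.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    done: bool = False

@dataclass
class PersonalAgent:
    interests: list
    tasks: list = field(default_factory=list)

    def plan(self, event: str) -> list:
        # A real agent would generate this plan with an LLM; here the tasks are
        # hard-coded to mirror the eclipse example above.
        if event in self.interests:
            self.tasks = [Task(t) for t in [
                "learn the science", "reserve a campsite", "locate restaurants",
                "find gas and supplies", "find local events",
                "map travel routes", "track the weather",
            ]]
        return self.tasks

    def execute_all(self) -> bool:
        for t in self.tasks:
            t.done = True  # a real agent would call booking/search APIs here
        return all(t.done for t in self.tasks)

agent = PersonalAgent(interests=["solar eclipse"])
agent.plan("solar eclipse")
print(agent.execute_all())  # True: every sub-task completed with no user involvement
```

The point of the sketch is the shape of the interaction: one conversational prompt in, a completed bundle of tasks out, with no user-facing interface for any individual step.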

As ‘users’ and ‘experiences’ fade into these new intelligent, autonomous models, will we see the demise of all UX professionals as technology intermediaries? And if so, if UX doesn’t redefine its role and establish a new set of relevant responsibilities in a changed world, does ‘UX’ as we’ve known and practiced it risk becoming obsolete?

Potential Opportunities and Responsibilities 

UX can (must?) assume a new, unique, and important role in this new age of intelligent machines. 

In fact, UX and all professions and crafts associated with HCI are uniquely positioned for this inevitable future – and we should vigorously accept that responsibility for our own sake and to help ensure AI delivers maximum promise with minimum risk and harm. Because HCI is fundamental and essential to the expansion and adoption of AI. 

But first, before we tackle our future challenges, UX and all HCI-related professions should take responsibility for our past culpability in many of the evils of the information age: 

  • Addiction to devices and virtual experiences, including the creation and optimization of interfaces specifically designed to manipulate brain chemistry to create those addictions. 
  • The well-documented corruption of our social fabric and our individual and collective emotional health and psyches from social media. 
  • The proliferation of the attention economy and turning people into commercial commodities. 
  • Abuses of personal data for the financial benefit of Big Tech. 
  • Rampant spread of misinformation by profit-driven algorithms and all the negative effects of that.
  • …and various other virtual experiences that act to the detriment of society. 

With AI, among various other risks, we face the evolution of the ‘attention economy’ into intimacy with machines. That’s a staggering new kind of HCI, a never-seen-before man-machine paradigm. UX can blindly enable that, as it did with the evils above, or do it differently this time.  

Any AI risk includes ways that UX can mitigate it – or exacerbate it – because nearly all AI is experienced through HCI. 

Potential UX ‘do it right’ contributions to AI and this new phase of humanity can include (but are not limited to): 

  • Break through the current AI UX dichotomy: AI is supposed to make life, tasks, and work easier and more productive, but many AI tools are complex, hard to learn, not intuitive, and are moving very fast. 
  • We need to go deep on investigating the contributions UX can make to the problem of AI/LLM hallucinations and trust. UX shouldn’t contribute to hallucinations and errors — so many tech interface failures in the past were due to stupid UX. Let’s avoid that. 
  • The right user experiences can help prevent ‘human counterfeiting.’ Build safeguards into all UX and AI user interfaces, to ensure that all human interactions with AI are somehow ‘watermarked’ to identify non-human interactions and avoid attempts to counterfeit humans. Some people will (and already do) choose intelligent interfaces and virtual experiences over interactions with real humans – but we need to ensure they do that consciously. 
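One minimal form such a safeguard could take is a machine-readable disclosure attached to every AI-generated message, so the interface can render a visible "you are talking to an AI" marker and downstream tools can detect it. The envelope format and function names below are assumptions for illustration, not an existing standard.

```python
# Minimal sketch (hypothetical format): wrap every AI-generated message in a
# disclosure envelope so no interaction can silently counterfeit a human.
import json

AI_DISCLOSURE = {
    "origin": "ai",
    "disclosure": "This response was generated by an AI system.",
}

def watermark_ai_message(text: str, model_id: str) -> str:
    """Attach the disclosure envelope to an AI response before display."""
    envelope = dict(AI_DISCLOSURE, model_id=model_id, content=text)
    return json.dumps(envelope)

def is_ai_message(raw: str) -> bool:
    """Let any client check whether a message came from an AI."""
    try:
        return json.loads(raw).get("origin") == "ai"
    except (ValueError, AttributeError):
        return False  # plain text: treat as human-authored

msg = watermark_ai_message("Here is your eclipse itinerary.", model_id="example-llm")
print(is_ai_message(msg))         # True
print(is_ai_message("hi there"))  # False
```

A real safeguard would need the label to be tamper-evident (signed, not just a JSON field), but the UX requirement is the same: the disclosure travels with the content, and the interface always surfaces it.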
  • UX research and methods can lead efforts in tracking how people are adapting to AI, what’s working, what’s failing, and where the biggest risks and opportunities are found in the interactions between AI and people. This is a key moment in history. Let’s record it. 
  • UX has a long history of identifying and avoiding ‘dark patterns’ (and in some unfortunate cases, using and promoting them). UX can help train the AI industry on existing dark patterns and how to prevent the creation of new ones – for AI will certainly discover its own devious approaches. 
  • We can provide tools and resources for all those developing open-source AI and using AI in the wild to ensure they can implement effective, high-integrity UX for AI. 
  • Eventually (and likely sooner than we expect) build self-aware UX into design tools and AI source code, as a mechanism to ensure quality and integrity. This of course is also a significant risk to the UX profession, as ‘automated autonomous UX’ ends most of the need for UX professionals. 
  • Model and provide our well-known business-focused, strategically oriented, user-centric attention to learning what’s happening in the real world, with real people, as jobs, agencies, companies, departments, and professions evolve. 

Again, all of these examples (and likely many others) highlight a key promise and premise: UX is uniquely positioned to make a significant contribution to our emerging technological future. 

In the short-term:

As AI advances and all these changes manifest in the coming months and years, traditional UX will likely continue for a time. The new will emerge from and exist alongside the current UX paradigm. During that transition, UX professionals can up their game by mastering: 

  • Prompt engineering 
  • Deep data analysis via massive LLMs 
  • End-to-end integration and application of new AI tools for UX 
  • Leading and facilitating the application of multi-modal AI that can ‘see’ UIs 
  • Leading the development and use of ‘intelligent UI design’ through ‘embedded UX’ 
  • Becoming the experts in the man-machine relationship with AI systems, well into the future
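As one concrete illustration of the prompt-engineering skill above, a UX researcher might maintain reusable prompt templates rather than ad-hoc queries. The template below is a sketch under assumptions: the wording, the function name, and the heuristic list (drawn from Nielsen's classic usability heuristics) are illustrative, not a standard.

```python
# Illustrative sketch: a reusable prompt template that asks a multi-modal model
# to critique a UI against classic usability heuristics.
HEURISTICS = [
    "visibility of system status",
    "match between system and the real world",
    "user control and freedom",
    "error prevention",
]

def build_ui_critique_prompt(screen_description: str, persona: str) -> str:
    """Assemble a structured UX-critique prompt for an LLM."""
    heuristic_list = "\n".join(f"- {h}" for h in HEURISTICS)
    return (
        "You are a senior UX researcher.\n"
        f"Evaluate the following interface for a user who is {persona}.\n"
        f"Interface: {screen_description}\n"
        "Check it against these heuristics:\n"
        f"{heuristic_list}\n"
        "For each heuristic, give one concrete finding and one fix."
    )

prompt = build_ui_critique_prompt(
    "a campsite-booking checkout page with a 12-field form",
    "a first-time visitor on a phone",
)
print("error prevention" in prompt)  # True
```

The same template can be pointed at a screenshot with a multi-modal model, turning a one-off critique into a repeatable research instrument.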

What’s next:

AI is already making a profound impact on the life of creators and the state of the Creator Economy. As all UX professionals are Creators, UX and the science of HCI may be the most revealing, compelling, and relevant window into the phenomena of AI.

This research and development will continue to:

  1. Investigate, validate, revise, and assess all the UX/HCI premises above, and more.
  2. Report on the ongoing evolution of the UX practice and profession going forward.
  3. Discover what ‘evolve or die’ means in UX now. We reinvent ourselves or the world will.
  4. Create ways to help the profession navigate this transition by leveraging the research into useful coaching, consulting, courses, content, cohorts, writing, speaking, publishing, and teaching.
  5. Participate in or run UX research projects with key partners to assess the evolving situation ‘on the ground.’
