
Zurich / Tallinn

Thesis

2024

A Swiss Case Study on
Legal AI Design

Role

UX Designer & Researcher
Team Lead Design

Responsibilities

Project Management
UX Research
UX Design

Timeline

6 months

Overview

This research was carried out as part of the MSc Interaction Design thesis at Tallinn University and Cyprus University of Technology. It explored the integration of artificial intelligence in B2C legal technology, focusing on comprehension, trust, and decision-making.

Following an iterative, human-centered design approach, the study engaged both laypeople and legal experts. Through prototype testing and expert reviews, the research produced design recommendations that focus on enhancing transparency and accessibility in legal AI tools.

Problem

While rapid AI advancements offer the potential to transform global access to justice, they also pose significant risks in our increasingly tech-dependent society.

Despite AI's growing presence in legal B2C tools, human-centered design is rarely prioritized in their development. This has raised concerns among legal professionals and laypeople about the trustworthiness of AI. It has also highlighted potential risks associated with relying on AI-generated advice for legal decision-making.

Research Objectives

1
To understand the needs of legal experts and layperson users as well as their perspectives on the use of legal AI tools.
2
To investigate how users comprehend the outputs of legal AI tools and how this affects their decision-making and reasoning.
3
To improve the design of legal AI tools based on user requirements and legal expert opinions.

Methodology

The overall research methodology incorporated three phases and resulted in a list of design recommendations.
Each phase built on the previous one, deepening the understanding of the problem and refining earlier insights.


Phase 1

Understanding

A scoping study was carried out to understand the context and prior research.
The following three themes were of particular interest during the first phase of the study.
‘Legal Tech’
The concept of technology-based legal services, equivalent to ‘legal technology’.
Access to Justice
People are reluctant to seek legal assistance in the event of a dispute due to the fear of costs.
Trust in Technology
At-risk interactions caused by ambiguity and poor design affect the trustworthiness of a system.

Stakeholder Requirements

To understand user needs and the risk perceptions shaping attitudes toward AI tools, the views of both laypeople and legal experts were explored.

Layperson
Research method
Views on legal AI
Legal expert
Research method
Views on legal AI

Personas

The first phase resulted in the creation of three personas: the legal expert, the over-trusting layperson, and the skeptical layperson.

Phase 2

The second phase of this case study aimed to explore how end-users interpret the output of a B2C legal AI tool and how it influences their decision-making and reasoning.
It also evaluated the tool's perceived usefulness, ease of use, and the level of trust users place in AI systems for legal assistance.

Design Solutions

A low-fidelity prototype was developed using insights from the user personas and findings from the previous phase.
The prototype included features such as a language complexity selector, legal reference sources and an option to contact a nearby lawyer.

Evaluation

The study used mixed methods, including task-based evaluation, Likert-scale feedback, and a think-aloud approach to analyze user interactions.
Participants completed the System Usability Scale (SUS) and Human-Computer Trust Scale, followed by a semi-structured interview.

Using a low-fidelity prototype, participants role-played as Swiss dressmakers asking an AI chatbot two legal questions.
They rated the chatbot's responses on competence, complexity, and likelihood of following its advice, comparing simple and complex legal queries.

Evaluation Results

Three overarching themes emerged from the thematic analysis of the data gathered:
Complexity & Understanding
Users found legal jargon overwhelming and preferred simpler default settings.
“I am happy to see that it encapsulated the complexity of the topic. But maybe it would have been nicer not to expose me to this complexity, but rather ask questions to actually simplify it at the end.”
(User A)
Output Structure & Clarity
Users struggled with large blocks of text and suggested breaking information into smaller parts.
“I have an attention problem; my (attention) span is not so long, and when it is just a paragraph, I am getting lost. I would like to say, OK, this part is bold, which is the important part, or to have bullet points.”
(User B)
Trust & Decision-Making
Users expressed concerns about the credibility and accuracy of AI-generated responses.
“Being more transparent about where this model is trained. I would think - which documents is it trained on? What is the filtering like? How do people, for example, decide which websites or data to train it on?”
(User C)

SUS & Trust Scores

The System Usability Scale (SUS) and Human-Computer Trust Scale (HCTS) provided the following results:
SUS
The evaluated prototype achieved an average SUS score of 81.6, indicating good usability, well above the commonly cited SUS benchmark average of 68.
Trust
Overall trust in legal AI chatbots was 60%, reflecting a low trust level. Participants with higher confidence in legal language reported lower trust in the chatbot.
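For readers unfamiliar with how a SUS score such as 81.6 is derived, the standard scoring procedure can be sketched as follows: odd-numbered (positively worded) items contribute their score minus 1, even-numbered (negatively worded) items contribute 5 minus their score, and the sum is scaled by 2.5 to give a 0-100 score. The responses below are illustrative only, not data from this study.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (score - 1);
    even-numbered items (negatively worded) contribute (5 - score).
    The sum is multiplied by 2.5 to yield a score from 0 to 100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical single participant's responses to items 1-10
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 4, 1]))  # → 85.0
```

Per-participant scores computed this way are then averaged across the sample, which is how the study's mean of 81.6 would be obtained.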

Phase 3

Expert Review

The expert review was conducted in two parts.
First, building on insights from the previous study, a co-design workshop was held with a group of designers and legal experts; this was followed by a domain expert interview with a Swiss legal tech expert.
Workshop
The co-creation workshop combined UX design methods with legal expertise to develop human-centered design solutions, informed by findings from the previous layperson user study.
Domain Expert
A discussion with a Swiss legal tech expert about the legal tech status quo, access to justice, legal design thinking, legal chatbots, and actionable legal advice from AI tools.

Key Findings

Highlights

Needs & perceived risk factors
Users expressed concerns about inaccurate AI outputs and data security, highlighting the need for transparency, user control over outputs, and adaptable language complexity.
Output comprehension
The comprehension of legal AI outputs depends on response complexity, including factors like structure and reference linking, which impact readability and accessibility.
Decision-making & reasoning
Users’ understanding of legal AI outputs significantly affects their decision-making and reasoning. Low trust, high-risk perception, and the need to verify responses reflect their cautious approach to relying on AI advice.
Design’s Role in Legal AI
Multidisciplinary teams and legal design thinking are essential to bridging the gap between users and legal experts. Human-centered design reduces complexity and improves comprehension and output structure, enhancing the effectiveness of legal AI tools.

Design Recommendations

The research findings informed four design recommendations to guide human-centered B2C legal AI development.
Insights from the studies were translated into a high-fidelity design solution:
1. Personalize the user experience from the get-go
Personalize the experience by letting users select their expertise level during onboarding, adjusting language complexity, and clearly explaining AI functionality and data management.
2. Use hidden prompting to understand the user’s problem
Use hidden prompting to collect the detail needed for accurate AI outputs: ask targeted questions during onboarding or after input, such as selecting a location to apply the relevant legislation, or rephrase the query back to the user to confirm understanding.
3. Create an output that is clearly structured and interactive
Design a clear, interactive output with an executive summary, detailed breakdowns, linked legal sources, and hover definitions for legal terms to enhance comprehension and support decision-making.
4. Help the user by providing more options during an interaction
Offer users more options: clarifications with examples; next steps such as learning more, creating documents, or contacting a legal expert; and dedicated tools for common issues like flight compensation claims.
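The hidden-prompting idea in recommendation 2 could be sketched as below: the interface silently wraps the user's visible question in context it collected earlier (jurisdiction, expertise level) before sending it to the model. This is a minimal hypothetical illustration; the function name, parameters, and prompt wording are assumptions, not part of the study's prototype.

```python
def build_hidden_prompt(user_question, jurisdiction, expertise_level):
    """Combine the visible question with silently collected context.

    The user only types `user_question`; `jurisdiction` and
    `expertise_level` come from onboarding choices (hypothetical values).
    """
    return (
        f"You are a legal information assistant. The user is located in "
        f"{jurisdiction}; answer only under that jurisdiction's law.\n"
        f"Match the explanation to a '{expertise_level}' reader.\n"
        f"First rephrase the question to confirm understanding, "
        f"then answer.\n\n"
        f"User question: {user_question}"
    )

# Example: the user sees only their own question, not the wrapper
prompt = build_hidden_prompt(
    "Can my landlord raise the rent mid-contract?",
    jurisdiction="Switzerland (Canton of Zurich)",
    expertise_level="layperson",
)
```

The design choice here is that personalization (recommendation 1) feeds directly into prompt construction, so the model's output is scoped and simplified before the user ever reads it.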

Limitations & Future Steps

Limitations
Sampling Methods
Methodology and Instruments
AI Prototype Evaluation
Future Research
Iterative Cycle
Trust and Legal Jargon
AI-Specific Evaluation
Broader Context