SAP • Data Migration
Designing a Product Feature: Case Study
Illustration: Design Thinking process
Client: SAP
Scope: Advanced product feature
Position: UX Designer
Duration: 1 month
The design team practiced a form of the Design Thinking process: designers used text-only Empathy Maps without personal details, led ideation sessions within product teams, and conducted unmoderated usability tests.
For this feature, the product manager collected feedback from in-house target groups. I was the only designer working on the feature, since there were more products than designers in the company. Here’s how we did it.
Steps 1 and 2: Empathize and Define
Contextual Inquiries
Image 1: Interview findings
I prepared the script around the topics suggested by the product manager and conducted moderated interviews as voice calls with screen sharing. I obtained recording permission from all participants and recorded the sessions. The product manager attended the calls but did not interfere until the structured part was completed.
I interviewed three people, one from each of our target groups: Support Specialist, Business Analyst, and Implementation Engineer. As usual, the output of the interview phase was one Empathy Map per user. I started by gathering insights for error message improvements, but we eventually decided to build the feature from scratch.
That decision postponed development, so I created transcripts of the recorded calls for future reference. The transcripts describe anonymous, real people focused on specific goals: no personal details, just the insights, pain points, and ideas participants provided.
Empathy Maps
Image 2: Distilled information
We used Empathy Maps to distill and organize the information gathered during interviews. Whatever participants said about themselves went into the Think and Feel area, since one can never really know what a person is thinking or feeling. The top row covered the person and the environment, the middle row the feature and the pressures, and the bottom row Pains and Gains.
The Say and Do section listed role-specific circumstances and the activities associated with the feature. Given the context, this was all about the synchronization of tenants or their individual components. Pressures and expectations ended up in the Hear section, although no sounds were involved.
We weren’t sure this was the best way to do it, but these documents captured facts very well. However, they did not capture the ideal scenarios participants described. I kept retelling those ideal scenarios to my coworkers during our meetings and sessions later in the process.
User Stories
Image 3: Story with derived sub-stories
During the process, multiple sub-stories emerged from the main User Story. Based on feedback from the team and the users, the document was rewritten and refined over several rounds of meetings and consultations. Since Design Thinking is not a linear process, stories were added, improved, or rewritten whenever necessary.
We used application-specific roles instead of job positions to keep things clear and simple, and we defined realistic, measurable goals instead of abstract ones. In this case, we could not estimate the duration of a migration process spanning multiple external environments, so we set the phrase “from a single page” as the measurable goal.
The product manager continued to provide additional information from the product’s users, which is why the user stories changed the most during the process. Even so, I invested most of my time in the prototype changes that followed later.
Step 3: Ideate
Session 1: Local and Remote
Image 4: Horizontal sequence of steps on a whiteboard
Our ideation session took place in a local conference room and in an online meeting at the same time. The product manager participated via a Zoom call, and we streamed video of the conference room for him. I moderated the session and wrote down the ideas he sent over. A Product Specialist, a Support Specialist, and a Test Engineer took part locally.
I had prepared a grid layout before the meeting, structured to show a horizontal sequence of events in several rows, but all the sticky notes ended up in the task-steps row. The question of permissions caused a lot of speculation, so we knew we needed a Developer to sort it out.
In the end, it was all about transferring components or tenants from one environment to another. We needed another ideation session to elaborate the synchronization and transfer paradigm. We couldn’t bring in a Developer at that time, but we had to have one for the next brainstorming session.
Session 2: In the Cloud
Image 5: Vertical sequence of steps in a cloud file
The second session was both local and remote again, but this time the remote part was just a voice call. We already knew we would discuss task steps in detail. I had prepared a cloud document where participants could enter a vertical sequence of steps. We didn’t need a video stream or a whiteboard.
Working in a single document turned out to be very distracting, so I had to create additional cloud documents and send the new links to participants. After the meeting, I merged all the color-coded elements back into one file.
We focused on the synchronization paradigm and listed many ideas and requirements for the ideal synchronization process. A Lead Developer answered open questions about permissions and login credentials for multiple tenants. During the discussion of ideal scenarios, we decided to go beyond the initial error message improvements. By the end of the session, we were all willing to rebuild the feature from scratch.
Step 4: Prototype
Finding the Solutions
Image 6: Wireframe
We conducted several design reviews and chose the concept shown in the picture above. The screen served as a hub page: it provided an overview of the available components and insight into the current selection. Users could easily drill down into the sub-component lists to update their selections.
I used Balsamiq to create the low-fidelity wireframes for the new concept. When there was no new concept, I did not work in low fidelity; it was faster to reuse existing high-fidelity elements and screens.
The search for a solution does not have to be tied to a level of fidelity. Sometimes only I can understand the hand-drawn sketches I create for my own use.
Determining the Details
Image 7: Detailed mock-up
I invested most of my design time in the detailed visual specification and in layouts that serve the purpose of each screen. It took several rounds of discussion and design modification to define this level of detail.
However, I had no influence over the visual appearance of the UI components or the branding. I strictly followed the style guidelines and the live online samples provided by the Director of UX.
I was the one who created the Sketch file with UI elements for our design team. Using the browser’s inspect feature, I read the CSS properties of existing UI elements and re-created them in Sketch to match the online specification.
Putting It Together
Image 8: Screen state
Initially, we validated high-level interactions at our internal meetings. After that, the product manager presented them to the implementation team, which performed migrations as a standard part of its job.
Based on the feedback we received, I kept updating the presentation, modifying the contents of the screens and linking them together into a prototype with simple interactions and complex screens. The interactive presentation grew to ninety screens.
If we wanted to formally test the functionality, we would have to split it into smaller modules and elaborate the interactions in more detail.
Step 5: Test
Nonstandard User Testing
Image 9: Standard test scenarios, not used for this feature
The figure shows a couple of standard test scenarios I created for our regular testing process. We didn’t do it that way for this feature.
For the product manager, it did not make sense to test such unique functionality with an anonymous user group, so he wanted to discuss it personally with existing users. There were no formal tests or Likert scales, and I didn’t write detailed test scenarios for the feature.
The product manager presented the latest version of the prototype to our target groups. They provided qualitative feedback we used either to improve the existing prototype or to confirm we were on the right path. During our regular meetings, the product manager kept us informed about user feedback and reactions to the prototype.
Improving and Retesting
Image 10: Chronological sequence above the hierarchical view
Proper visualization of events was the critical aspect of this feature, together with the improved content of the error messages. By abandoning the hierarchical view used for nested components and arranging the events into a chronological sequence, I aligned the visualization of the messages with our users’ mental model. The result was two levels of chronological lists: a top-level migration events list and a second-level system events list. The latter could be opened from a migration event or from a related component.
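To make the two-level structure concrete, here is a minimal sketch of how such event data could be modeled. The type and field names (MigrationEvent, SystemEvent, componentId, and so on) are illustrative assumptions for this example, not the product’s actual data model.

```typescript
// Illustrative model of the two-level chronological event structure.
// All names here are assumptions, not the product's real schema.
interface SystemEvent {
  timestamp: string;            // ISO 8601, e.g. "2019-05-21T09:26:53Z"
  severity: "info" | "warning" | "error";
  message: string;              // the improved, human-readable message text
  componentId?: string;         // ties the event back to a migrated component
}

interface MigrationEvent {
  timestamp: string;
  title: string;                // top-level step, e.g. "Tenant export started"
  systemEvents: SystemEvent[];  // second-level list, shown on drill-down
}

// The UI lists migration events chronologically; selecting one (or a
// related component) opens its system events, also in chronological order.
const timeline: MigrationEvent[] = [];
```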
Once the team was satisfied with the results, I prepared a Zeplin specification and provided the necessary information for the development team in their Jira ticket. Along with the instructions, I included links to the Zeplin project and the Marvel prototype.
The Usual and the Unusual
Things That Were Unusual
This feature was a special case because of its very distinct set of functionalities. It would have been very hard to find anonymous users with experience in that kind of tenant migration, which is why the product manager decided to test the feature internally.
The testing process was very informal, and we didn’t collect any test results for this feature. It was not the usual procedure for us.
The success criteria were:
1) Users from our target groups can understand the key screens.
2) User stories are supported by the designs.
Internal users provided the feedback we needed. We had planned to perform unmoderated usability tests of the critical functionalities, but the feature grew into an epic. Once the epic is divided into new features, we will have the opportunity to test them one by one.
Things That Were Usual
Usually, we prepared test prototypes as smaller modules of up to thirty screens, with carefully elaborated interactions. We conducted unmoderated usability tests with target groups on usertesting.com and tested with internal participants only if they belonged to the desired target groups. The prototypes and the scenarios were the same in both cases.
We validated our tests with internal subjects first, even if they didn’t belong to our target groups, to uncover basic flaws and inconsistencies in the scripts and prototypes.
We used a seven-point Likert scale to measure two metrics in our tests:
1) How confident are you that you completed this scenario successfully?
2) How would you rate the overall ease of use for this scenario?
A single result below 4, or an average result below 5.25, meant we needed to improve something.
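To illustrate, that threshold rule can be expressed as a small function. This is only a sketch of the logic described above, with an assumed function name; it is not a tool we actually used.

```typescript
// Sketch of the pass/fail rule: scores are 1-7 Likert responses
// collected for a single scenario across all participants.
function needsImprovement(scores: number[]): boolean {
  const average = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  // A single result below 4, or an average below 5.25, flags the scenario.
  return scores.some((s) => s < 4) || average < 5.25;
}

console.log(needsImprovement([6, 7, 3, 6])); // true: one result below 4
console.log(needsImprovement([5, 5, 5, 5])); // true: average 5.0 < 5.25
console.log(needsImprovement([6, 5, 6, 7])); // false: passes both checks
```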
Things That Are Always the Same
First, we articulate the design challenge, as it is called in the double-diamond process. After that, we apply a chosen process while learning as much as we can about the product and the way it is used. Then we deliver design specifications. Later, during the development phase, we explain the design and negotiate compromises.
These activities are always the same for in-house product designers, but none of us is quite sure where they belong in a process. That’s just one of the reasons we need a flexible process that can be adapted to circumstances when necessary.
And yet no process can make up for a lack of expertise. Only skilled individuals can deliver good results, whether in design or any other profession.