Data design and internal tooling

Challenge: Improve functionality and searchability to increase Data Modelers' efficiency and make the tool scalable to other lines of business (LOBs).

My role: As the Service Design Lead, I worked with our product partners to appropriately define the problem and the scope of research. I led the research discovery and facilitated interviews with 10 users with unique jobs to be done. After synthesizing our findings, I worked with tech and product teams to identify and prioritize opportunities to define the MVP, and I created the prototypes for user testing.

My team: Service Design Lead (me), Product Partners, Tech Partners, UI Designer

Summary: Our data modeling team and product partners came to us with an ask: help them rethink the UX flow of their internal data modeling tool in the hopes it would become the one-stop shop for data model input and output across all LOBs. Over 4 months we led research with 10 users in varying roles, uncovering critical pain points around knowledge transfer and storage and inefficient communications that greatly slowed their processes. We identified and assessed solutions that could quickly build foundational research storage capability and worked with our tech teams to phase in commenting and tagging capabilities, bringing communication and records into the tool itself for quicker and more reliable communication. We designed 2 net-new pages and redesigned the information hierarchy across the entire experience.

Our Research Process

The product team had done initial research and designs and wanted design's perspective on verifying the direction and UI of the tool as they looked to increase adoption with new users and potentially scale to new LOBs. While there were a few "no regrets" changes we could make right away, we wanted to better understand the role this tool could take on as we scaled and how it would impact the ecosystem. We held 10 interviews across a variety of user roles, levels, and areas of expertise to assess current-state issues and needs, ideal-state wishes, and day-to-day workflows, recording the other tooling and processes at play within the ecosystem.

We worked with product partners to identify each interview participant based on a series of parameters we had determined. The interviews were held over Zoom and included demos to better understand intuitive behavior in the tool and other needs within each participant's workspace. We began each interview with broad questions about general process and pain points and then dove deeper into integration needs and flow requirements.

Each interview was recorded and lasted around 45 minutes. We then went back through the recordings and wrote up each interview verbatim to ensure accuracy for synthesis. We translated each quote and nugget of information onto a digital whiteboard, color coded by interview and user role, as a foundation for synthesis.

Facilitating an interview and demoing the current tool to assess pain points.

Our Findings

We then grouped the findings by theme and derived critical insights that spoke to the overall mindset and behavioral drivers across the process and ecosystem. In our first set of themes, rooted in cleaning up the current tool, we found that users struggle to trust the data in the tool and have a hard time tracking and maintaining their research efforts. Much of the distrust stems from the naming conventions of the metadata fields, statuses, and use cases, as well as the heavy reliance on manual updates. The lack of research storage in the tool compounds the distrust: there is no record of why decisions around an attribute were made, or whether the necessary research was actually completed.

Digital synthesis board with color coded findings grouped into pain point themes.

Our second group of themes focused on enhancing the experience by tackling inefficient communications, search limitations, and an ineffective hierarchy of information. Users had a hard time tracking communications and understanding the changes made to an attribute, since many of the conversations happen informally over Slack or in meetings and therefore leave no record. They are also unable to find the right attribute efficiently with the current rigid search inputs and limited filter capabilities, and once they do find an attribute, they have a hard time sifting through all the information to locate what they need.

Current state blueprint by user mapping out each step of the process with related pain points.

Our third group of themes was around ecosystem impact and needs. Users currently have no transparency into what has been modeled or what still needs to be modeled, and therefore spend a lot of time searching for each attribute individually to gain that understanding. This compounds when determining an entity hierarchy, because it is hard to compare similar attributes, and it creates issues in accurately reporting the work done or still remaining, especially as teams begin to scale to other LOBs. Additionally, users currently work with a total of 8 other tools to complete the data modeling required within Modelhub, which remains a significant hurdle for full adoption of the tool. Without integration or additional capabilities, users lean on the tools they already use rather than relying on Modelhub for the full experience. Many of these tools handle case management and status tracking, which currently cannot be done in Modelhub, though every user asked for those capabilities within the tool so they could easily know their assignments and the status of each attribute.

Parallel to our synthesis clustering, we built a current-state blueprint to better understand what the system was currently doing and the context of the pain points we were hearing in interviews. We mapped the blueprint step by step through the overall process, by user, capturing the needs and issues at each step on both the frontend and the backend. The blueprint begins with the requester's research and intake request, moves into the data modeler's process of defining an attribute, and then into the approvals process, which currently lives outside the tool and relies heavily on spreadsheets for feedback and Slack for notifications, assignments, and conversations. Once fully approved, the attribute goes to the Drools team to build out the Drools rules before completion.

Our users

After assessing the current-state process, we took another look at the users we interviewed and the roles we were seeing across the process. We noticed that both product and analysts had the same job to be done, though different expertise for getting it done. We grouped these two users under "consumers," as that more accurately represents their relationship with the data itself. We then wanted to highlight how we could think about access and page-level impact by user to better understand the overall needs and track impact as we built out our roadmap.

Our opportunities

We captured the opportunities and needs from the research and then brought tech and product partners into a brainstorming session to push further on what the ideal state of the tool could and should be. We used a creative matrix to make sure we were consistently improving our main pain point areas of cleaner data, better communication, and better tracking, along with How Might We (HMW) questions to push our ideation forward into a visioning space.

We then prioritized the brainstormed opportunities and those from the interviews by level of effort and impact while keeping any associated risk or dependency in mind.

Our Roadmap

We verified our assessment assumptions with tech and began building out an opportunity tree to showcase how, when considering dependencies, we could begin building toward our ideal. Each sticky was color coded to the page it would impact, to better assess the level of effort of any initiative or "branch." Anything that could improve the research or approvals process was marked with a star to highlight efforts that could radically improve the major needs we heard.

From the tree, we determined two critical foundations that we highlighted for our MVP. The first was restructuring the search page to eventually allow for a full search capability, including searching by API, consumer, or producer. The second was to build out a research section for each attribute to provide a platform for commenting and tagging, enabling full integration and broader adoption, especially for approvers.

We mapped out the other opportunities in a next, then, later sequence to further highlight the need to build on a strong foundation before going beyond critical needs to include user wishes that could bring delight to the whole experience.

The ideal process

To improve both trust and efficiency, the process needs to reduce the number of manual steps and updates. We optimized the process at large and determined critical triggers for each stage that could easily be automated, and we identified the frontend and backend needs and actions of each stage. Finally, we renamed the stages to be clear, general, and easily understood by any user, especially new ones, so that clarity, relevance, and understanding can be sustained when scaling the product.
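
To make the trigger idea concrete, here is a minimal sketch of automated stage transitions; the stage names and trigger events are illustrative assumptions, not the renamed stages from our work:

```typescript
// Hypothetical sketch of stage transitions driven by automated triggers,
// replacing the manual status updates that eroded users' trust.
// Stage and event names are illustrative, not the final naming.
type Stage = "Intake" | "Modeling" | "Review" | "Rule Build" | "Complete";

type TriggerEvent =
  | "request_submitted"
  | "attribute_defined"
  | "approvals_complete"
  | "rules_built";

// Each event advances an attribute from a specific stage to the next one.
const transitions: Record<TriggerEvent, { from: Stage; to: Stage }> = {
  request_submitted: { from: "Intake", to: "Modeling" },
  attribute_defined: { from: "Modeling", to: "Review" },
  approvals_complete: { from: "Review", to: "Rule Build" },
  rules_built: { from: "Rule Build", to: "Complete" },
};

// Advance only when the event matches the attribute's current stage,
// so out-of-order events cannot corrupt the status.
function advance(current: Stage, event: TriggerEvent): Stage {
  const t = transitions[event];
  return t.from === current ? t.to : current;
}
```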

The ideal design

We built low-fidelity wireframes to showcase the overall flow and better assess what the MVP could and should look like, making sure we were tackling critical user needs and functionality.

Search: To clean up the search page and lessen the amount of information on it so that behavior stays intuitive, there should be one search field that accepts either a source attribute name or a canonical attribute name. For further refinement, there should be an additional filters button.

Filters: Upon clicking filters for further refinement, a modal would appear on the right-hand side. The filters will be checkboxes to enable multiple filters at once, and the filter options will be organized by usage priority, with status as the first section.

Search Results: Results will surface high-level information about the attribute, its hierarchy, and the overall status and management of the attribute. This information will update automatically upon any change. Users will be able to star results, which will then appear in their profile.
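
As a sketch of the search model these wireframes imply (field names and the filter sections are illustrative assumptions, not a shipped schema):

```typescript
// Illustrative model of the simplified search: one field that matches either
// a source attribute name or a canonical attribute name, plus multi-select
// checkbox filters with status as the first section.
interface SearchQuery {
  term: string;             // single input, matched against both name types
  statusFilters: string[];  // first filter section, per usage priority
}

interface AttributeResult {
  canonicalName: string;
  sourceNames: string[];
  hierarchy: string;        // entity hierarchy at a glance
  status: string;           // kept current automatically on any change
  starred: boolean;         // starred results surface in the user's profile
}

// A result matches when the term hits either name type and the selected
// status filters (an empty selection means "no filter applied").
function matches(result: AttributeResult, query: SearchQuery): boolean {
  const term = query.term.toLowerCase();
  const nameHit =
    result.canonicalName.toLowerCase().includes(term) ||
    result.sourceNames.some((name) => name.toLowerCase().includes(term));
  const statusHit =
    query.statusFilters.length === 0 ||
    query.statusFilters.includes(result.status);
  return nameHit && statusHit;
}
```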

Research Tab: Users strongly asked for a research tab where they could store their research, so they don't have to go back and find it all each time they need it and have a way to quickly reference it for any questions down the line. A separate space for links and notes allows quick access to the links when needed, plus free-form sections for any context that may help knowledge transfer.

Metadata Tab: Users generally appreciated this page and its hierarchy of information. The tab structure allows this page to focus on the metadata, with each other section available for reference when and if it is needed.

Drools Tab: The Drools team has to scroll through everything each time they update their work. Quick access to their section via a tab lets them easily pick up where they left off without scrolling. Additionally, once the attribute is with the Drools team, the transformation tab can become the default page.

Request Intake: Currently, requesters do initial research and include it in their request, but because of a lack of formal knowledge transfer, the data modeler rebuilds the research from scratch. Giving the requester edit access to the research tab lets them store their information, which the modeler can then modify or build on.
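
One way to picture this handoff, with hypothetical role names and fields chosen for illustration:

```typescript
// Illustrative shape of a shared research tab: the requester seeds it at
// intake, the modeler builds on it, and the record stays in the tool.
type Role = "requester" | "modeler" | "approver" | "consumer";

interface ResearchTab {
  links: string[];     // quick-access references
  notes: string;       // free-form context for knowledge transfer
  lastEditedBy: Role;
}

// Requesters and modelers can edit; other roles read for reference.
function canEdit(role: Role): boolean {
  return role === "requester" || role === "modeler";
}
```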

Profile: Users eventually want to be able to track their work in Modelhub. In the meantime, they at least want to easily find what they have been working on or collect the attributes that are ready for review in a meeting, which can be done by populating starred attributes in a profile view.