

Gallagher Re Client Profitability
UX/UI Case Study


Problem Statement
Gallagher Re is a reinsurance firm that provides financial protection to insurance companies, covering risks that are too large for insurers to carry on their own.
The previous Client Profitability platform was a tool used to assess the profitability of clients and to understand why some produce more profit than others. Executives and other members of the senior leadership found the tool lacked many of the capabilities needed to accurately analyse the profitability of clients and client groups, or to manage resources. For the financial team, much of the process of analysing and extracting data was manual, and users of the platform found it laborious and inefficient.
As a rule, the business needed to be able to easily identify which clients were achieving profitability of 30% or more, and to re-evaluate those below this threshold to understand how their profitability could be improved. They also needed insights into how resources were used across the business in order to identify potential areas for improved efficiency.
Due to the design of the previous platform, users highlighted the inability to compare data points across different variables (client groups, products, teams, etc.), which restricted the amount of actionable information that could be extracted to identify ways of improving profitability.
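The 30% rule described above can be expressed as a simple filter over client financials. The sketch below is purely illustrative: the field names, figures and margin calculation are assumptions, not the platform's actual data model.

```python
# Illustrative sketch of the 30% profitability rule: flag clients whose
# profit margin falls below the threshold so they can be re-evaluated.
# Field names and figures are invented for the example.
clients = [
    {"name": "Client A", "revenue": 120_000, "cost": 70_000},
    {"name": "Client B", "revenue": 80_000, "cost": 62_000},
    {"name": "Client C", "revenue": 200_000, "cost": 135_000},
]

THRESHOLD = 0.30  # the 30% profitability rule


def margin(client):
    """Profit margin as a fraction of revenue."""
    return (client["revenue"] - client["cost"]) / client["revenue"]


# Clients below the threshold are the ones the business needs to re-evaluate.
needs_review = [c["name"] for c in clients if margin(c) < THRESHOLD]
# Client B's margin is 18,000 / 80,000 = 22.5%, so it is flagged for review.
```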
Stakeholder Interviews
The purpose of conducting stakeholder interviews was to gain an understanding of what was expected of the Client Profitability solution and how it would be delivered. Stakeholders were gathered and interviews conducted to agree on high-level deliverables, who would need to be involved, and what technology would be leveraged.

Stakeholders outlined the issues faced with the previous platform and how they affected business decisions. Members of the financial team were also involved in the call and detailed the manual processes they endured due to the lack of functionality. Stakeholders stressed the importance of being able to identify high-profitability clients, to ensure the business was servicing them adequately and continuing to nurture their growth. They also explained the necessity of identifying clients below the 30% threshold and being able to understand why this might be the case. In particular, stakeholders mentioned the need to see which resources were working on a client in order to understand costs and identify potential efficiencies.
User Goals
From the initial stakeholder calls, a list of user goals was developed detailing deliverables and expectations for the final solution. Many of the user goals derived from a need to automate and simplify previously manual, laborious tasks, while others described new functionality aimed at streamlining existing tasks and improving outputs.

How Might We
The stakeholder interviews uncovered not only a series of pain points but also a number of opportunities to introduce helpful new features and dramatically improve the user experience. For this reason, the How Might We technique was employed to flesh out these pain points and brainstorm innovative ways to overcome them.

User Interviews
User interviews were organised with the platform's most frequent users, with the goal of understanding how the platform was most commonly used, what the pain points were, and what users ultimately hoped to achieve with the new solution. Each interview lasted an hour, and questions were selected based on participants' experiences with the previous platform and their expectations of the new solution. The style of questioning was as open-ended as possible to allow for unbiased and unhindered answers.


The interview feedback provided incredibly useful insights into the issues faced with the previous platform and allowed for a greater understanding of user needs from the perspectives of different personas. The feedback could be quantified to inform prioritisation decisions, and it also highlighted unique improvements achievable at low development cost.
Prioritisation Map
While feedback from the user interviews was vital in understanding expectations for the final solution, it raised the question of what could be developed within the timeframe. The feedback was a large compilation of 'must-have' and 'nice-to-have' features, and it was necessary to involve the dev team and a solution architect to understand how much could be achieved within resource capacity. Suggested functionality was cross-referenced with business goals and stakeholder deliverables to weigh its importance against build time, which was illustrated using the Prioritisation Mapping technique.
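A prioritisation map of this kind places each feature in a quadrant according to its value to the business versus its build effort. The sketch below shows the general idea; the feature names (apart from the cross-variable comparison mentioned earlier), scores and quadrant labels are assumptions for illustration, not the actual workshop output.

```python
# Illustrative prioritisation-map sketch: features scored 1-5 on business
# value and build effort, then placed into one of four quadrants.
# Scores and most feature names are invented for the example.
features = {
    "Compare across client groups": {"value": 5, "effort": 2},
    "Automated data export":        {"value": 4, "effort": 4},
    "Custom dashboard themes":      {"value": 2, "effort": 4},
    "Saved filter presets":         {"value": 3, "effort": 1},
}


def quadrant(value, effort, mid=3):
    """Classify a feature by its value/effort scores relative to a midpoint."""
    if value >= mid and effort < mid:
        return "quick win"
    if value >= mid and effort >= mid:
        return "major project"
    if value < mid and effort < mid:
        return "fill-in"
    return "reconsider"


mapped = {name: quadrant(s["value"], s["effort"]) for name, s in features.items()}
```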

User Persona
The development of user personas served as a vital reference point for ensuring user expectations were met and requirements achieved. Information was taken directly from user interview feedback and included details such as job roles and day-to-day tasks, which helped to explain why each persona would use the solution and what they would require from it. Key metrics and features were also collected as part of each user persona in order to understand what users expected to see when using the platform and the type of functionality they expected to be able to utilise.



User Journeys
While the user interviews proved to be an excellent tool for understanding the functionality expected from the new solution, mocking up the user journeys helped to determine how users would navigate to the proposed features.
The user interviews unearthed several issues with existing processes, with some participants describing them as lengthy and complicated, resulting in a subpar user experience. It was therefore necessary to streamline and optimise a number of journeys to better fulfil user needs.



Information Architecture
Developing an IA diagram was crucial to providing a seamless and efficient navigational experience. A natural hierarchy of content was established: the business > the clients that make up the business > the staff whose time makes up the cost of servicing those clients. Content was then divided based on where each user persona would spend most of their time, according to their tasks and user needs.
During the user interviews, all participants mentioned key metrics they would need to refer to on a regular basis and, as a result, would want to be able to access easily. From a business profitability perspective, these included queries such as results by revenue banding, results compared to last year, and the number of profitable clients by banding. There were also less frequently accessed queries, such as results per BU, per team, and so on.
For this reason, the business profitability section of the platform would be divided into two pages: ‘Overview’ and ‘All Business Results.’
The ‘Overview’ page would include visuals of the most commonly requested queries as mentioned above and would allow quick access to interpret, consume and download key metrics, while the ‘All Business Results’ page would provide a more granular view of key metrics along with a number of other useful but infrequently requested results.
The same approach would be taken for the ‘Client Profitability’ section of the platform, with the difference that the ‘All Clients’ page would effectively be a list of all clients and client groups along with their metrics. Users could filter and search as necessary to find the client and metric they were looking for.
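The banding queries surfaced on the ‘Overview’ page amount to grouping clients into revenue bands and counting them. The sketch below illustrates that shape of query only; the band edges, field names and client figures are all assumptions made for the example.

```python
# Hypothetical sketch of a "results by revenue banding" query of the kind
# the 'Overview' page would surface. Band edges and data are invented.
from collections import Counter


def revenue_band(revenue):
    """Assign a client to an illustrative revenue band."""
    if revenue < 50_000:
        return "<50k"
    if revenue < 250_000:
        return "50k-250k"
    return "250k+"


clients = [
    {"name": "Client A", "revenue": 120_000},
    {"name": "Client B", "revenue": 30_000},
    {"name": "Client C", "revenue": 400_000},
    {"name": "Client D", "revenue": 90_000},
]

# Count of clients per band, e.g. for a banding chart on the Overview page.
counts = Counter(revenue_band(c["revenue"]) for c in clients)
```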



Wireframes/Lo-fi Designs










User Testing
The purpose of conducting user testing was to evaluate the user experience and validate assumptions made during design, particularly around key user journeys. It was imperative that users could access important information as easily as possible, which meant navigation throughout the platform had to be seamless and intuitive. Other key features requiring validation were the ability to filter data, download data, segment results and locate client financial information.
To achieve this, a test plan was created which included a list of questions alongside tasks representative of those a user would perform. Participants were then asked follow-up questions based on their experience: whether they felt the solution met their needs and whether improvements could be made.
Testing was carried out remotely via Zoom, and the approach favoured open-ended questions and minimal interaction during usage, so as to mimic an authentic user environment and obtain reliable results. Feedback was collated into a testing results document, and participant recommendations were categorised as cosmetic, moderate or critical in order to determine how they would be implemented.



Final Designs







