
Lightning

Lightning is a mobile app and desktop plugin that provides greater transparency between the two teams that work inside The Home Depot stores in an effort to reduce inefficiencies. It surfaces the Merchandising Execution Team’s task schedule to department supervisors and store managers for more proactive planning and provides a direct channel to communicate with the Merchandising Execution Team’s supervisors.


Contributions

  • observations

  • interviews

  • surveys

  • information architecture

  • wireframing

  • usability testing

Methods

  • observations

  • semi-structured interviews

  • surveys

  • wireframing

  • usability testing

  • heuristic evaluation

Duration

3 months, Sept – Nov 2018

Team

Temi Moju-Igbene

Manasee Narvilkar

Aparna Ramesh


Background

We began with a directive from The Home Depot to look at ways that task management could be improved. They provided us with some problem areas that appeared to affect the efficiency of their in-store associates. From this information, we approached the project with the general question:


How can the end-to-end task management process at The Home Depot be made more efficient?


We started with a general exploration into task management in retail, looking into how it's dealt with, and conducted a brief competitive analysis of what systems existed.

User Research

Observations

As we explored the space generally, we also began early in-store research for further context, doing observational work focused on how in-store associates interacted with one another and with customers. This helped us learn how best to approach them and focus the topics we could ask about.

Semi-Structured Interviews

After the initial exploration into the space, we began conducting semi-structured interviews in the stores, pulling from associates working on the floor and asking for supervisors and managers as needed. In total, we spoke to 12 employees, part-time and full-time, including associates (7), a Merchandising Execution Team (MET) member (1), department supervisors (2), and associate store managers (ASMs) (2).

Given the exploratory nature of this part of the project, we felt semi-structured interviews were the best method as they would allow us a balance of depth and flexibility. It also allowed us to modify the interviews as needed since the store employees were still working as we conducted these interviews. For the focus of our questions, we had our overarching research question we got from our industry partner (The Home Depot), but we also wanted to discover what the in-store associates wanted, needed, and what pain and pleasure points exist around task management and execution.

For our interviews with the in-store associates and the MET worker, our goals were to learn the tasks they have; how they receive, manage, and report these tasks; and what tools they have available and how they interact with them for their jobs.

For our interviews with the department supervisors and associate store managers, we wanted to understand their responsibilities and their relationship to task management; how they fit into the chain of task creation and delegations; and what avenues they have for communicating with the corporate team for feedback about what's going on in-store and what kinds of support the corporate team could possibly provide.

Survey

In addition to the interviews, we conducted surveys to understand what activities associates spend their time on and with what frequency. We used the survey to validate our discoveries as well as learn more about the activities of the store associate. Our surveys were distributed in-store on mobile devices.

Our goals for the survey were to learn:

  • how tasks overlap

  • the associates' First Phone and app usage

  • the supports they feel are available to them

  • if the pain points we identified with the interviews are ones they would identify for themselves

The results from the survey supported our initial analyses from the interviews and helped to focus our solution. Since we were present while these surveys were distributed, some participants would elaborate on some of their answers directly with us, which we noted and included in our analyses.

Affinity Mapping

LightningAffinity.png

We did affinity mapping to organize apparent themes from our interviews. We had two sessions, one focusing on our interviews with in-store associates and the other with supervisors and managers, as we found their needs and interests differed based on responsibilities of their roles.

Results

From our analyses, 9 major themes became apparent:

  • The customer comes first, but managing customer service along with other responsibilities can be challenging.

  • Most in-store associates believe the current tasking system works and like the manner in which they receive, complete and report tasks.

  • Most in-store associates use and like the tools available with the exception of a few that are slow.

  • In-store associates and MET associates have different jobs and responsibilities.

  • MET operates in silos. This affects the work of the in-store associates and their supervisors.

  • Communication issues exist between the stores and corporate.

  • Supervisors and ASMs have autonomy over planning and task delegation.

  • There is a feedback loop present and available between the stores and corporate, but communication gaps still exist.

  • Supervisors and assistant managers have ideas on how certain processes could be improved.

From these themes, we narrowed our focus down to one:

MET operates in silos.

User Needs

In this problem area, we found two sets of user needs.

For in-store associates, it was important that they know what tasks related to MET's activities they'd be doing so they could better plan their day and avoid doing redundant work (e.g. stocking a shelf that would be changed by MET shortly after).

For supervisors and ASMs, it was important that they be able to see what MET has scheduled and have a direct line of contact to the managers in charge of the MET associates. This is so they can better coordinate their own assignments and activities, as well as have a point of contact in cases where clarification is needed.

In the table below, supervisors and ASMs were grouped together under “Supervisor” as their needs were largely the same.

LightningUserNeeds.png

Design

Ideation

With our user needs and their implications in mind, we carried out a brainstorming session as a team, generating several ideas that might address the concerns stated above. Starting from the core problem area of MET visibility, we created three "how might we" questions for our two primary user groups (in-store associates and the department supervisors and associate store managers) as well as for the store overall.

How might we…

  • help associates track changes in their tasks better?

  • help supervisors feel more in control of their departments?

  • reduce inefficiencies caused by the separation of responsibilities?

Relating to these three questions, we generated ideas that we felt would address the problem and grouped them into three categories, each category encapsulating one of the “how might we” questions.

LightningHowMightWe.png

From these ideas we conceptualized three design concepts, bringing together complementary features.

Concept 1

A smart store tasking system was a concept that combined the following features:

  • Live tracking and map-based depiction of aisles, bays and store-level activity

  • Automatic inventory management system

  • Voice interface giving contextual suggestions and recommendations as the associates and supervisors walk the aisles

  • Convey the completion of tasks through a conversational voice interface

  • Real-time depiction of products on shelf based on computer vision and physical shelf sensors

This concept uses voice interfaces as a means of communication, which can be exclusionary to workers with hearing impairments. These features are supplementary, though, so the main functionalities of the design are not dependent on them. On the other hand, the inclusion of these features can also increase accessibility for those with vision or mobility impairments, since they wouldn't be as reliant on screen and phone use.

At the locations we visited, we noticed a significant population of older associates, as well as some with apparent mobility issues; incorporating this technology could reduce the amount of movement required of them and make their movements within the store more efficient.

LightningConcept1.png
LightningConcept2.png

Concept 2

A shared tasking platform, which included a common task pool and an interface for supervisors and associates to view MET activity and generate tasks. The associates primarily do customer service, picking tasks from the common task pool when footfall is low or based on task urgency.

We incorporated some voice interfaces as a method of message sending. Similarly to the first concept, these are not core functionalities but instead supplement methods of communication already in place. Unlike the other two concepts, this one is more screen dependent so it should be designed to work well with screen readers and other assistive technology. The screens will also likely be very text-heavy. 

This product could be integrated with the existing apps and programs used by store employees on the First Phones and desktops.

Concept 3

A physical, interactive notification delivery system around the existing shelves of The Home Depot. Sensors and switches would be placed around aisles and bays and can be activated by store employees to provide visual indicators that the area needs attention by an employee.

Two of the main issues we would have to consider are the placement of the input receivers (buttons or touch points) and the manual dexterity required to activate them. The input receivers should also be placed in locations that reduce tampering by customers, but doing so may reduce some of their accessibility. We would also have to explore the different types of indicators and how we would code the different tasks. Initially we were thinking colored lights, but they would have to be differentiable by any associate with reduced color vision and would likely need to be combined with some other indicator, possibly a light pulse pattern. This concept also relies heavily on visual cues, which can exclude workers with visual impairments. There might also be issues of visibility if the indicators are obscured from a low vantage point, affecting people in wheelchairs or those of shorter stature, or blocked by items around the area.

LightningConcept3.png

Validation

To settle on a direction and get more feedback, we talked to in-store associates and asked them for their overall impressions, the feasibility of integration into current workflows, and features they felt would benefit the product. For each concept, we spoke with two employees (associates or department supervisors) from different departments and asked them to score specific features based on interest and provide additional thoughts as they came.

LightningInStoreConceptValidation.jpg

Feedback

After collecting responses (survey results, summarizations, and direct quotations) from the store associates, we compiled and organized them by design concept. We pulled emergent themes and highlighted those which were shared among multiple participants or had strong reasoning behind them.

Participants who had been employed at The Home Depot for many years noticed that when tasks needed to be picked up voluntarily, it was usually the same few employees volunteering. They recommended having the tasks assigned instead; this would be more efficient and effective, as associates will do something when asked but generally don't volunteer. It is also important that there be visual follow-ups to completed tasks. Participants who were department supervisors or associate managers stated that even when an associate completes a task, there still needs to be a visual check to make sure it was done correctly.

Participants raised concerns with some of our design concepts that had features which seemed to micromanage. Too much micromanaging could lead to resistance from store associates and reluctance to adopt some of these design features. We kept this in mind as we went into our prototyping.

Prototype

From the feedback we received, we decided to further develop our idea for the shared tasking platform, taking into consideration what the associates mentioned about this and the other concepts. We reduced some of the features we initially included based on general themes from earlier research as well as the feedback we received for the concept. Our prototype took the form of a mobile application, as many tools were already accessed via the First Phone, more people accessed tools via First Phones than desktops, and we were more familiar with how applications appeared on the First Phone versus the desktop.

The associate store managers (ASMs), supervisors, and associates all saw value in the shared tasking platform, but their scopes differed. We decided to make different features available to each job role based on how critical those features were to the role's work. Accordingly, the user flows would also differ.

LightningFlow.png

ASMs and supervisors would have visibility into the MET schedule. They could view past, ongoing, and upcoming MET tasks by department. Based on a task, they could create new tasks for associates in their department. These tasks are assignable to a specific person, and supervisors could also give each task a priority and a due date. Additionally, associate tasks would be linked to the MET task they originated from.
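The task creation and review flow described here implies a small underlying data model. A minimal sketch in Python (all type and field names are our own illustrative assumptions, not The Home Depot's actual schema):

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Priority(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


class Status(Enum):
    ONGOING = "ongoing"
    NEEDS_REVIEW = "needs review"
    CLOSED = "closed"


@dataclass
class AssociateTask:
    """A task a supervisor creates for an associate, derived from a MET task."""
    title: str
    assignee: str       # the specific associate the task is assigned to
    priority: Priority
    due: date
    met_task_id: str    # link back to the originating MET task
    status: Status = Status.ONGOING

    def mark_completed(self) -> None:
        # The associate marks the task done; it moves to the supervisor's
        # Needs Review tab rather than closing immediately.
        self.status = Status.NEEDS_REVIEW

    def close_after_review(self) -> None:
        # The supervisor visually verifies the work, then closes the task.
        self.status = Status.CLOSED
```

Note that `mark_completed` routes a finished task into review rather than closing it outright, matching the feedback that a visual check was the only trusted confirmation of task quality.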

LightningASMPaperFlow.png

In their task list view, they would be able to see the tasks that are ongoing, need to be reviewed, and are closed. The Needs Review tab would allow them to check on the completion and quality of a task. We included this tab because we received consistent feedback that visual checks were the only way to confirm a task was done up to standard, due to the varying experience levels of associates and varying definitions of task completeness.

LightningSupPaperFlow.png

For associates, they would be able to view the list of tasks in their department. We decided to let associates view all department tasks, not just those assigned to them, to increase visibility; it is also very common for associates to complete each other's tasks based on availability. In the app, when an associate completes a task, they can mark it as completed. This lets the supervisor know the task is ready for review.

LightningAssocPaperFlow.png

Early Feedback

To get early feedback on this early prototype, we had our users Think Aloud as they carried out a set of tasks. We felt this would give us rich data, even though it placed a high cognitive load on our users and the results were loosely organized due to limited moderation. We were aware of possible differences in feedback between users and were conscious of disparities in the details and items they did and didn't talk about. After each Think Aloud, we asked participants a set of follow-up questions to get their overall impression of the solution and additional insight into the features.

Our goal with our first round of feedback was to refine the details of the solution and identify gaps in the design for each user group. In these feedback sessions, we noted down user reactions, expectations, concerns/confusions, and answers to our post-session questionnaire. These questions focused on eliciting explicit opinions on the features of our wireframe and the value of the information that was presented through it.

We had five participants test this prototype: two associates, one department supervisor, and two ASMs.

paper prototype.png



For our next prototype, we implemented the suggestions from our feedback session and created higher-fidelity clickable screens based on the wireframes. Some of these changes were:

  • Updating copy with clearer wording and making disambiguating options more apparent. The main area where there was confusion was with task creation (creating a task for a MET associate vs. in-store associate) and marking a task as complete (“complete by” was interpreted as prompting a date rather than a name).

  • Changing the MET schedule view to default to a chronological list of upcoming tasks rather than a week-wise view. Having descriptions visible and seeing the order of tasks was more important to the ASMs and supervisors.

We maintained the same user flows, expanding a few of the screens for a more immersive testing experience.

The task flow an ASM would follow during testing.

Evaluation

Heuristic Evaluation

To identify major usability issues with our design, we ran an expert evaluation with four experts (peers and local alumni from the Human-Computer Interaction program and a UX designer from The Home Depot) and had them rate our system against Nielsen's 10 usability heuristics. Experts were given context on our system and what it did. We explained the three main flows (ASM, supervisor, and associate) and then gave them our prototype to evaluate. A team member was present during the evaluations to provide initial context and answer any questions the experts might have. We chose heuristic evaluation to find usability issues we could quickly and easily address prior to creating a high-fidelity design.

The areas that required improvement were Consistency and Standards as well as Error Prevention. The main issues revolved around navigation, dialog boxes not following standards, and action buttons and UX copy being unclear.

Positive feedback included clear, minimalistic design with good use of white space and appropriate fields and level of information.

We took this feedback into account when compiling our list of design recommendations for a future iteration.

User Testing

We also administered Think Aloud tests with our users. Our goal was to identify what features each user group expected and liked and how “intuitive” the system design was for in-store use based on current practices.

For the Think Aloud, we gave our participants a brief overview of the capabilities of the system and what functionalities they’d have access to. From there, we allowed them to explore the wireframe with minimal direction, providing some guidance when it seemed they weren’t making progress into the functionalities present. We first wanted to get a general impression of what the users saw and thought. Overall, we wanted to know if the system followed established mental models so they’d be able to navigate without our help.

image.jpg

Different user groups had different flows. The ASMs and department supervisors had very similar task flows in our wireframe; the functions they have access to vary, but these differences weren't fully present in this mock-up.

The first task for ASMs and department supervisors was to look at the schedule of MET's activities. We wanted to gauge their level of interest in this information and what level of granularity and presentation would best match their needs.

The second flow was to create a task for an in-store associate based on a MET task that was in the schedule. Here we wanted to get insight into whether or not this is a feature they would find useful and if the information presented was enough.

The last flow we had them go through was looking at the created tasks for in-store associates. Again, we wanted to gain insight into how the users would understand the information presented to them and if the level of information present was helpful.

For in-store associates, the functionality is much more limited: their views were restricted to the tasks created for them by either the ASMs or department supervisors. Their flow was to view a task assigned to them and mark it as completed. Our goal for testing this flow was to see if it was understandable and easy to go through. In some earlier prototypes there was confusion over the UX copy, and we wanted to see if that was still an issue after some iteration.

After the Think Aloud tests, we asked participants a set of follow-up questions to get their overall impression of the solution and additional insight into the features.

Results

The findings we gathered from ASMs and supervisors were that they understood and liked the core functionality of the prototype: being able to see the MET schedule. ASMs also found value in being able to give feedback to MET supervisors. We received feedback regarding the presentation of the schedule; some users preferred more granularity on upcoming tasks and wanted task descriptions viewable from the main schedule view.

For associates, they expected to only see tasks assigned to them rather than the entire pool of tasks in the department. They also mentioned wanting more details in the list view so that they wouldn’t have to navigate to a detail page unless necessary.

Our users also had some difficulty discerning what was an interactable item and what wasn’t, and some confusion around the UX copy.


System Usability Scale

We decided to use SUS to gauge the usability of our system. It's a relatively quick way to get a measure of usability. While it lacks context and explanation for its results, it doesn't require users to know much about UX or usability, and it has simple, short response options. It can also be administered quickly, which suited our time constraints, and it produces reliable results even with a relatively small sample like ours.

While we do get some insight into the overall measure of usability, we don’t get insight into what led our respondents to give the feedback they did; it is not a diagnostic method. We’re unable to make any assumptions on what worked well and what didn’t just based on this method. More research and questioning would be required for us to understand which features were well received and understood and which weren’t. This is why the SUS was supplemented with the Think Aloud user test.

To calculate the SUS scores, we followed the standard practice. The scores were then averaged among all users to get an overall impression of how the system rated. The scores were generally pretty good, most falling into an above average ranking but with obvious room for improvement. The overall average score was 79.58, with adjusted scores ranging from 62.5 to 92.5.
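The standard scoring practice can be written out explicitly: odd-numbered items contribute (score − 1), even-numbered items contribute (5 − score), and the sum of contributions (0–40) is multiplied by 2.5 to yield a 0–100 score. A small sketch (the example responses are illustrative, not our actual data):

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from ten 1-5 responses.

    Odd-numbered items contribute (score - 1), even-numbered items
    contribute (5 - score); the sum of contributions (0-40) is
    scaled by 2.5 to give a 0-100 score.
    """
    assert len(responses) == 10, "SUS requires exactly ten responses"
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5


# Example: a strongly positive respondent
# (high agreement on odd items, low agreement on even items)
print(sus_score([5, 1, 5, 1, 5, 2, 4, 1, 5, 1]))  # → 95.0
```

Per-respondent scores computed this way are then averaged across all participants, which is how our adjusted scores of 62.5 to 92.5 yielded the overall 79.58.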

Design Recommendations

Based on feedback we received with the clickable prototype from our user group and experts, there are changes we would like to make going forward.

  • Modify copy to be clearer and more consistent with The Home Depot’s internal branding and voice.

    • Having access to style guides or working with their UX writers would help in this regard.

  • Intentionally design our product to be more consistent visually and operationally.

    • We could leverage established mental models to reduce onboarding and learning times.

  • Remove the task assignment feature for store associates and remove them from the core user group, focusing instead on building communication and a relationship between managers and supervisors and MET.

    • The task assignment feature produced the most confusion for our users, and given how task assignments currently function in the stores, it may not be an effective addition at this time.

  • Create clearer visual hierarchies and distinctions between interactables and non-interactables.

  • Address issues present around accessibility, error prevention, and standards around navigation, modals, and dialog boxes.