- Perform a light heuristic/expert evaluation of your experience.
- Identify a specific usability or information architecture problem that you feel is important to solve and tell us why.
- Create a new design for that problem, showing us as much of your process and design evolution as possible from low to high fidelity.
- Describe a plan for how you might validate the efficacy of your proposed solution.
Prior to tackling this problem, I’d like to learn as much as possible about our persona (and the needs of prospective marketing customers in general):
- What matters most to this type of user?
- Do they typically shop around? Are we always being compared to competitors?
- Do we understand how/why a competitor’s product is chosen over ours?
- Is this type of user also a decision-maker/buyer (or influencer)?
- What is this user’s technical aptitude? Are they intimidated or confused by the complexity of SendGrid’s transactional offering?
I’d also like to gather as much existing data on user behavior within the website and the app itself:
- What is the typical customer journey (acquisition, activation, retention, revenue, referral)? Where do potential customers drop off in this process? Do we have feedback around why they leave or continue?
- Do users struggle with onboarding? Do we have specifics (so we can prioritize them with the product team)?
- Do we have customer feedback/requests that will lead to actual revenue (and don’t conflict with the overall product strategy)?
The goal of gathering all of this data is to narrow our assumptions as best as possible. We want to understand the problems and needs of the user so that we can build meaningful solutions that lead to revenue – and of course, we want to provide an amazing customer experience (and treat it as a competitive advantage for our product).
For this review, I wanted to empathize with our marketing intern (who we’ll call ‘Meghan the Marketer’) as much as possible – What is her day-to-day like? Is she often asked to review products like this? How much influence does she have on the recommendation and purchase of products like ours? What is her technical aptitude? What does she care most about?
Here’s a quick recap of my experience:
- Sign Up - pleasant & effective
- Onboarding - helpful, yet time-consuming
- Dashboard - lots of features for the transactional product (not relevant to our persona), could be overwhelming for her
- Create email campaign / Template editor - easy to build emails, fun switch from visual to code editing
- Send a test email - difficult to find and use
Overall, the process is a bit lengthier and more involved than Meghan might expect. She just wants to draft and send a test email as quickly as possible - if all goes well, she’ll keep digging and evaluate SendGrid further (this is a huge assumption).
In my case, reaching this simple goal took much longer than expected.
I’d love to look at some user data to understand the typical behavior here. Do others find this to be a similar stopping point? Where do they typically go from here? Are they mostly prepared to follow the setup guide? Or do they start seeking out specific features of the product?
Potential Focus Areas for Improvement
Below are some notes that I took during the UX review. Again, this is all based on my experience alone - both qualitative and quantitative data should be reviewed to help understand and prioritize design work.
- Introduce helpful empty-states (to encourage engagement and add a bit of personality and make the product more enjoyable)
- Separate the transactional and marketing-specific products entirely (likely a large, but potentially beneficial, effort)
- Provide edit history within the template editor (or a more obvious way to undo actions)
- Improve usability of email preview actions
- Improve usability of send test email action
Problem → Solution
My hypothesis is that improving the usability of sending a test email will provide the greatest benefit to both Meghan and SendGrid.
To revisit the task - Meghan’s goal is to quickly sign up and take the product for a test drive. If she becomes intimidated or confused at any point in the process, she will likely go elsewhere and SendGrid will fail to gain a new customer.
Before building anything, I want to ensure that these assumptions are correct. If we build something that people don’t really care about (or has low impact), then it’s all a waste of time and we should’ve focused our efforts elsewhere.
But let’s keep going as though we’ve already validated this need. Here’s a user story to spell out the work a bit more specifically:
As a prospect, I want to quickly build and send a test email, so I can evaluate the product for continual use and/or purchase.
Presumably, I’ve already done quite a bit of research and can also gather enough data to make informed decisions on how to proceed.
Next, I want to shift into the following:
- Ideation/brainstorming (whiteboarding session with colleagues, stakeholders)
- Hypothesize potential solutions to test (refine output from the whiteboarding session into mid-to-high fidelity mockups)
- Build interactive prototypes to test on prospects and customers
- Test potential solutions to validate/refine/iterate
- Push our solutions to market and begin measuring customer data
- Monitor and iterate the solution as necessary
During the whiteboarding discussion, the team may notice that there’s an opportunity to improve the usability of the preview features as part of this solution. One concept is to add a ‘Send Test’ icon into the app’s header (alongside the preview options) and add a tooltip hover-state (perhaps during user testing we learn that there’s pretty low recognition for the fact that those icons actually represent preview actions).
A high-fidelity mock of this concept might look something like this:
A potential solution where we consider adding an icon next to the other Preview actions
We also explore a concept to remove the preview icons altogether, and instead opt for a simple ‘Preview and Test’ dropdown in the top navigation. This would allow room to add some helper text that might help users quickly understand the preview actions AND quickly send a test email.
We then discuss trade-offs that may come with this solution, including the effort required from the engineering team and the additional user action required to select a preview mode (hover/click). Ultimately, we decide to move forward with the ‘Preview and Test’ concept.
Since we have an existing styleguide, we can quickly build high-fidelity mockups and a working prototype to test. The workflow includes:
- Main view (new navigation element)
- Dropdown menu
- Email modal
- Success message
A simple, interactive prototype to demonstrate the user flow
As mentioned above, we’ll need to validate that this solution provides a great experience for our users:
- Is the solution effective? (does it solve the problem?)
- Is it easy to use? (is there room for improvement?)
- Is it enjoyable? (does/should it create an emotional response?)
- Does it have buy-in and support from the team and relevant stakeholders?
Always Be Testing
The best way to answer the above questions is with user testing. For this particular problem/solution, the process might look like this:
- To quickly replicate the ‘first time user’ experience, we can draft usability tests on sites like UserTesting.com. Within a matter of hours we can better understand user expectations and answer some of the ‘is this a great experience?’ questions. We’ll also want to specify screening criteria for participants who closely match our marketing persona.
- Conduct additional in-person testing with prospective customers, stakeholders, or people in our professional network.
- Gather feedback from existing customers (work with account management team or users who have opted in for user testing feedback) to understand how this will affect their current experience and email marketing workflow.
Our results will help determine what to do next. What did we learn? Are we ready to ship this solution? Do we need to refine the experience? Does this solution present any new problems? We can run through this same feedback loop until we’re confident that the solution is ready for market.
Working with Product & Engineering
As with any design project, communication and collaboration are critical to the entire process. We need to share learnings with the team early and often - building a shared understanding of customer needs, problems, constraints, and potential solutions. We also want to foster a sense of shared ownership over the product design and the encompassing customer experience.
So how do we do this?
- If possible, assemble a cross-functional team to participate in the design process.
- Expose the team to the user research process (test results, insights, feedback).
- Focus on outcomes, not output (communicate the why behind design).
- Distill any learning/findings into simple, sharable content (one-pagers, brief presentations, user testing video/audio feedback, data summaries, etc).
- Provide deliverables that capture the intended user experience and limit gaps between design and development.
- Instill confidence in the engineering team by understanding their workflow, limitations, and expectations at all stages of the development process.
What if there’s a time crunch?
We never want to release a bad experience, but it’s often better to release a ‘good enough’ feature quickly (so you can learn and iterate from real customer data) than to wait for perfection.
However, if there’s a strong chance that this feature will not receive any investment from engineering once it’s in production, then we’ll want to do whatever we can to ensure that our solution provides a great experience.
How do we ensure the solution is successful?
Set benchmarks. If there’s no relevant data before launching the feature, then it’s going to be more difficult to understand whether or not it’s successful.
Some potential metrics to consider throughout this effort:
- Retention rate (are users more likely to continue to use the product?)
- Conversion rate (are users more likely to pay for the product?)
- Reduction in operational costs (does our solution reduce customer service issues?)
- Engagement (how much time do users spend onboarding and using the product?)
- Change to NPS (or any other metric that quantifies the customer experience)
One additional note: in the above concept, I used the same ‘paper plane’ icon for the send test action (which is already used for the send campaign action). This might confuse some users, and we may want to consider replacing it with something else (e.g. an ‘email envelope’ icon…or maybe use text alone). In either case, we should be able to identify the best path through test feedback.
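To make the benchmarking idea concrete, here’s a minimal sketch of how a few of the metrics above could be computed from funnel counts. All event names and numbers here are made up for illustration - real figures would come from product analytics, not from anything in this review.

```python
# Hypothetical benchmark sketch: compute a few of the metrics above
# from made-up funnel counts. Nothing here reflects real SendGrid data.

def rate(numerator: int, denominator: int) -> float:
    """Return a percentage, guarding against an empty denominator."""
    return round(100 * numerator / denominator, 1) if denominator else 0.0

# Illustrative counts for one cohort of new sign-ups.
signups = 1200
sent_test_email = 540        # reached the 'send test email' milestone
active_after_30_days = 300   # simple retention proxy
converted_to_paid = 90       # simple conversion proxy

benchmarks = {
    "test_email_rate": rate(sent_test_email, signups),
    "retention_rate": rate(active_after_30_days, signups),
    "conversion_rate": rate(converted_to_paid, signups),
}
print(benchmarks)
```

Capturing a snapshot like this before launch gives us a baseline, so that after release we can compare the same rates and see whether the ‘Preview and Test’ change actually moved them.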
I’d love to hear your thoughts on this project. Please feel free to contact me if you have any additional ideas or questions.