Design With Outcomes: A Case for Objective Design Evaluation

I recently had a conversation with my former colleague and dear friend, Makenzie Guyer, a UX designer for Xbox at Microsoft. She had helped mentor a group of aspiring design graduates at her alma mater and came away with bittersweet feelings about their blue-sky, illustrative portfolios.

“While these are some stunning creative works, they’re not going to get them a job,” she remarked. 

If there’s one thing we’ve both internalized since joining the tech industry after graduation, it’s that good design is results-oriented. Many undergraduate programs still gloss over this, leaving curricula that emphasize subjective critique over objective analysis, testing, and experimentation.

The Macro Approach

As a designer, you will spend most of your career being critiqued. Your deliverables are visual, and people like to comment on how things look. Even in user experience design—a field whose methodology is founded on the scientific method—people still like to hypothesize and never validate: “The average user will notice this first” or, “As a user, I’d rather X than Y.”

It is your job as a designer to ingest the feedback and assumptions thrown your way and filter them with data, test results, and credible evidence. You should recognize that critiques don’t determine good design—end results do. If you’re working on a team whose decisions are heavily based on critiques, which happens a lot in creative agencies, you should consider introducing a few quantitative methods to your crits. This advice applies to all areas of design as well, whether you’re creating logos, posters, banner ads, or entire applications and services.

In The Design of Everyday Things, Don Norman suggests, “A design that people do not purchase is a failed design, no matter how great the design team might consider it.” It is crucial to set tangible goals to measure the effectiveness of your work. Good metrics are based on business goals and product vision. In my experience, there are three types of metrics usually used to quantify impact:

  1. Business metrics: These tend to be revenue-driven and cover acquisition, adoption, retention, referral, revenue, and so on.
  2. Experience metrics: These measure the quality of the user experience. Examples include happiness, ease, task completion rate, retention, and more.
  3. Social impact metrics: These measure the societal, political, and environmental impact of a design. Check out the (S)TEEP framework.
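To make the experience metrics above concrete, here is a minimal sketch of how one of them—task completion rate—might be computed from raw usage events. The event names, the `events` structure, and the `task_completion_rate` helper are all illustrative assumptions, not part of any real analytics product:

```python
# Hypothetical example: turning raw usage events into an experience metric.
# Event names ("checkout_started", "checkout_completed") and the event list
# are invented for illustration; real data would come from your analytics store.

events = [
    {"user": "a", "name": "checkout_started"},
    {"user": "a", "name": "checkout_completed"},
    {"user": "b", "name": "checkout_started"},
    {"user": "c", "name": "checkout_started"},
    {"user": "c", "name": "checkout_completed"},
]

def task_completion_rate(events, start="checkout_started", done="checkout_completed"):
    """Share of users who finished the task after starting it."""
    started, completed = set(), set()
    for e in events:
        if e["name"] == start:
            started.add(e["user"])
        elif e["name"] == done:
            completed.add(e["user"])
    return len(started & completed) / len(started) if started else 0.0

print(task_completion_rate(events))  # 2 of 3 users completed, so ~0.67
```

A number like this turns a critique-room hunch (“users will find checkout easy”) into something you can track release over release.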

Before you start drafting that first user flow, answer this question: “Given the finished product, how does it change existing outcomes?” Come back to this answer every time you find yourself debating the specifics of pixels, layouts, or interactions with someone. Often, you’ll conclude that it doesn’t matter; other times, you’ll realize it’s worth investigating, and you might devise a test plan. Either way, use this answer to navigate the process.
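As one sketch of what such a test plan could quantify, below is a simple two-proportion z-test comparing conversion between a control design and a variant. The counts are made up for illustration, and this is only one of many ways to analyze an experiment:

```python
# Illustrative only: comparing conversion rates between two design variants
# with a two-sided two-proportion z-test. The sample counts are invented.

from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical experiment: 120/1000 conversions on control, 150/1000 on variant.
z, p = two_proportion_z(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

The point isn’t the statistics themselves—it’s that a debated layout change becomes a hypothesis with a planned sample size and a pass/fail criterion, rather than a matter of taste.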

It is also worth noting that result-oriented design is not the enemy of creativity. If anything, you devise more creative solutions under the constraints of the outcomes set.

The Micro Approach

While quantitative data can be used to set larger goals for your design, it can also be used to pressure-test the effectiveness of smaller, immediate UIs. As a product designer, how many times have you discovered during implementation that your original designs failed to handle a large volume of user input or a tricky edge case?

A quick scroll down Dribbble and you’ll find hundreds of eye-catching dashboard graphics. I call them graphics, not mockups, because they are not representative of real products. To design real applications is to know constraints, errors, and edge cases. How do we make the 80 percent case good and the 20 percent case not bad either?

By querying the usage of the product area at hand, we’re able to consider all possible scenarios of an interface before architecting it. UI scoping has become an important step in my process, for it allows me to design holistic, resilient experiences that work even when things go wrong. 
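A minimal sketch of that kind of UI scoping query: pulling the names users actually give their projects and checking the extremes before committing to a layout. The data here is invented (the `project_names` list and `percentile` helper are assumptions for illustration); in practice it would come from your product’s database:

```python
# Hypothetical "UI scoping" query: what lengths of project names must the
# interface accommodate? The sample data below is invented for illustration.

import math

project_names = ["Q3 Report", "", "Untitled", "A" * 240, "día de lanzamiento"]

lengths = sorted(len(name) for name in project_names)

def percentile(sorted_values, pct):
    """Nearest-rank percentile of a pre-sorted list."""
    index = max(0, math.ceil(pct / 100 * len(sorted_values)) - 1)
    return sorted_values[index]

# Scope the design against real extremes, not just the happy path:
print("empty names:", sum(1 for n in project_names if not n))  # empty state needed?
print("p50 length:", percentile(lengths, 50))  # the 80 percent case
print("p99 length:", percentile(lengths, 99))  # does the layout truncate gracefully?
```

A five-minute query like this surfaces the empty states, 240-character names, and non-ASCII input that a Dribbble-style mockup quietly ignores.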

Don’t get me wrong, it is absolutely important for a design to go through reviews—this is where you gather intel from both designer and non-designer stakeholders that will aid your process. However, the tradition of critique-based decision-making in the design world has to change. Traditional graphic design programs, in particular, should shift their curricula from critique-based to test-focused. The effectiveness of a logo or the placement of a button on the page should not be decided by your professor or your creative director. You should let the users decide.

Sydney Anh Mai is a Product Designer at Kickstarter.