If you’re a Product Designer like me, you’ve probably been asked to design something “small” when you’ve got bigger things you want to attack. This thing might be an A/B test, or it might just be a small design request: polishing or adding something to an existing experience.
I’ll admit, I’ve rolled my eyes at these requests, and I’ve felt at times like I’m justified in that. These requests are sometimes shortsighted fixes for bigger problems. Sometimes they’re tests of things I know won’t pan out. Sometimes the direction ignores more systems-level experiences. Sometimes a win at the granular level backs us into a corner, because it never synthesizes into a higher-level application.
So basically, when the next person asks me if we can test the green button over grey, or to throw a lock on it, I’m pretty much going to pull my hair out. Admit it, you feel the same way sometimes.
Is it okay to push back on these things? We’re Product Designers. We have good instincts. Our design directions should always be right. Right?
Experimenting through small things gives us an opportunity to validate everything we think we know. At Facebook, we use both quantitative and qualitative data to point us in a general direction, but we never really know how an experiment is going to go until we put something out into the wild. Our hunches aren’t enough.
Use small things as experiments, ways to learn more about that big thing you want to attack.
I know. Sometimes it’s frustrating to take the small steps when the big steps are obvious. It’s time consuming, and it doesn’t always allow us to exhibit our expertise as master designers. The silver lining, however, is that designing for the small things paves the way for the big things — the crazy ideas, the big bets — because all the angles have been explored, challenged, and vetted.
In order to do big things, you must do the small things first.
Learning to do the little things has been an exercise in growth for me. I’ve learned to move quickly, and I’ve learned to design for real people instead of myself. I’ve learned that by supporting my cross-functional teammates, we communicate and collaborate better. I’ve learned that in order to tell a compelling product story, supporting details are a must. I’ve learned that experimentation is foundational to creating thoughtful and meaningful experiences.
Here are a few key things that I’ve learned about designing for the small things.
You need a range of ideas to know what to experiment on and how. And everyone has ideas. Get them all. Designers, product managers, engineers, content strategists, data scientists, researchers, marketers, even interns all have different and valuable perspectives.
How do you get those ideas? Try splitting your collection methods into two buckets: passive and active. Passive collection could include things like lists, online groups, or forums. Active generation could include things like working sessions or brainstorms. Get into the habit of both.
I work on the Privacy team at Facebook, and we work on a lot of the small things. Privacy’s hard; it’s a big, complicated space. There’s a lot to do. It’s not the sort of area where it’s often beneficial to make massive product changes, roll them out with big fanfare and announce, “Hey world! We changed your privacy entirely!” Many of you know how that works. Many of you know how that feels. In order to make thoughtful, meaningful, and useful changes to an experience, we first need to expose the entire landscape of what’s possible, determine the potential impact, and then give it a go and learn before we scale to the world.
I recently facilitated a brainstorm with the team to better understand what we might do to improve privacy. We started with a statement guide: “If we ____________, people would feel ____________ about privacy on Facebook.” The exercise had only one very simple rule: There are no bad ideas.
I kicked it off. My first suggestion? “If we showed people a photo of us, the Privacy team, people would feel like they had real people looking out for them and would feel better about privacy on Facebook.”
Okay, that’s kind of a silly idea. I’m pretty sure no one wants to see our mugs on a privacy communication. We’re an odd-looking bunch. But I suggested this for two reasons.
First, it allowed me to get the ugly out of the way, and it exposed an underlying need that people have: to feel like someone real is looking out for them. Second, it paved the way for someone else to riff off of the theme.
Perhaps if we think that people want an advocate for their privacy, there’s another way to do that. Or maybe a teammate hears “photo” and has a spark about an idea for photo privacy. Or maybe there’s an awkward silence, and it goes nowhere, but it gives someone else the courage to speak up because their idea can’t possibly be worse than mine.
Which is totally fine. Because there are no bad ideas.
It’s really easy—especially as a Product Designer—to go with your gut, whether from expertise or personal experience. It’s our job to know what works and what doesn’t in terms of interaction patterns and usability and delight. We exude confidence in that; it’s our area of proficiency. But let me remind you, we’re not always right.
When it comes to design experiments, you can never be too confident in what might move a metric or have impact. You might have an idea, and you might have a hunch, and you might have a very strong opinion, but you never really know for sure what the result might be until you test it.
And let’s be real, you might not really know the difference between your idea, your hunch, or your strong opinion. So let these—your assumptions (because that’s what they are)—be your guide, not your commander.
Add words back into your vocabulary like “possible” and “probable.” Replace “I’m pretty sure that…” with “My personal thought is that…” Make a distinction between the known and the unknown. Have a firm understanding of the difference between presumption and knowledge (and make sure your ego knows that too).
Which leads me to this: Use your knowledge. Use your findings from research inside and outside of your organization, and from other experiments. Allow this knowledge to craft your assumptions. Set those side by side with things you absolutely-one-hundred-percent-without-a-doubt know will never work, and test them. I guarantee there will be a time when you are surprised, when your assumption was wrong and that thing that was never going to work, worked. And if you’re smart, you’ll tuck that away in your knowledge bank for later.
I was recently pretty darned positive that an educational product we were promoting wouldn’t be successful. I came to that conclusion because we know that if you give people too much to read, they won’t read it. We know this from user research. We hear all the time that there’s “too much text” and that passages are “too long.” So when it came time to test this thing, I scoffed.
Guess what? I was wrong. This experiment tested positively. We discovered people do read under the right conditions. My assumption moving forward is that people will read educational content if it’s well produced and it’s about something they care about.
But who knows. I could be wrong.
You usually don’t get a silver bullet. You most certainly don’t get a box of them. If you work with a silver bullet mentality and put all your resources and energy into the one thing your team is super enthusiastic about and it doesn’t work out, it’s incredibly disappointing. No one wants to see teammates discouraged, to see ideas fail or abandoned and work scrapped. The easiest way to bounce back from this is to be producing at a quick pace and a steady cadence.
A successful team moves fast, and they do often leave a trail of failed experiments in their wake. But here’s the deal: quick experiments, even failed ones, are the best way to learn what works and what doesn’t. So when the first two things don’t work, try two more. And when that doesn’t work, try two more. And if those fail too, perhaps it’s time to approach something from another angle, but give it another two, and then two more.
Get comfortable with the fact that it might take ten tries to get a positive result from your experiment. It might take twenty or more.
Over the course of days and months and years, you’ll find that you’re no longer solely working on assumptions, but that both your knowledge base and the strength of your hypotheses are growing.
And oh, don’t forget to document this stuff for the next person to fill your shoes. I beg this of you. It’s just as important to keep a history of what didn’t work along with what did. Someone down the road is going to have the same idea, the same hypothesis, and you can either allow them to run the same test with the same conclusion, or you can help them frame and shape that idea into a more refined iteration. There are patterns and insights to be discovered in collective learning that shouldn’t be left by the wayside.
I think the hardest thing about working as a designer in an experimental environment is letting go of the expectation of perfectly crafted and polished experiences. At Facebook, we consider quality experiences to be composed of three conjoined parts: they have clear value for people, are easy to use, and are at the highest level of craft. If you happen to be designing something small, ask yourself this: how would you prioritize these three parts?
I’m not suggesting you put shitty design out to the world. What I am suggesting is to focus on the value and usability up front. We already know there’s a likelihood of failure for experiments. Is it worth spending hours on polish? Experiments aren’t precious; they’re scrappy. Until one succeeds, and then you polish that thing until it shines.
There are two things that are helpful here, once you’ve lightened up on your perfectionism.
First, identify what kind of design work the task at hand needs. On my teams, we’ve set up a tagging system for the cross-functional team to identify the scope of design, both for my own planning and to set expectations around design resourcing. The tags are “design-work” and “design-consult.”
“Design-work” is more traditional design, which might entail some research and strategy, sketching, direction generation and refinement, and so on. There’s usually strong knowledge and a validated hypothesis behind these. “Design-consult” is slightly different and is incredibly useful when the small things come up; these tasks are often used to validate assumptions or learn more about data patterns. They won’t get full design support in the traditional sense, either because the cost in design time is too high or because they’re simply a low priority. I can address them in a number of ways: a five-minute mock, an in-person consultation, or a design review after development. These consultation tools let all of us move fast, generate, iterate, and, most importantly, learn.
Second, this isn’t a get-out-of-jail-free card; you are still obligated to deliver high-quality experiences, and yes, it happens very often that quick experimentation results in a fragmented user experience with less-than-desirable craft. You, dear designer, are responsible for this.
Designing quickly is an exercise in prioritizing. In order to arrive at the best experience, you must not only discover what works through testing; you must also synthesize the wins and repackage them into quality experiences.
How might you do this? You enter into a contract with your product and engineering team. You need their agreement that while you support them in trying these sometimes unpolished things, they will support you in making them shine once they’ve proven their worth. You need to carve out room in the scope for post-experiment design, and when it comes time for that, you need to work as fast to bring it to a high quality level as they did to get it out the door as an experiment.
Here’s the deal: Small design is foundational to big design. Precise understanding gets you closer to meaningful impact. You will be a better designer and a better teammate if you master this sort of experimental design and collaboration. We design for people. And it’s not just about getting more people into your experience or using your tool. It’s about providing the best experience possible once they’re there, which means you must test, iterate, and improve. This is how you fulfill your product’s mission: be it offering the best service, facilitating meaningful discovery, or making the world more open and connected.
Designing incrementally, testing, applying learnings, and repeating is a strategy that lays the bedrock for bigger bets and higher-stakes endeavors, as well as new products and complete overhauls. Without designing for the small things, we aren’t able to understand, and without understanding, it’s incredibly challenging to have purposeful impact.
Ultimately, the small things are the big things. So take care to learn from them.