Thoughts on Prototyping
Communicating concepts, testing, and real data.
It doesn’t matter if it’s Framer, Adobe XD, or some other tool. We’re ultimately trying to do three things: create designs that can be tested, communicate concepts to our team, and create designs that are as close as possible to their eventual engineering implementation, also known as ‘the real thing.’
If a picture is worth a thousand words, a prototype is worth a thousand meetings. – Tom & David Kelley, IDEO
Framer has been a critical part of my design process for about three years. This form of high fidelity prototyping has been valuable for several reasons. On a personal level, due to its original CoffeeScript system, it forced me to improve my logic and overall ability to write code. It also taught me to think about each state of an interaction, which ultimately made the engineering hand-off stage much more effective.
Being able to walk through prototypes in person, as well as share video clips and links to interactive designs, is extremely helpful for addressing the various states of these interactions.
Beyond my personal development (learning to code quickly, improving interaction designs, and communicating more effectively with engineers), there are three key points that make it so valuable in the product design workflow, particularly in the context of working within a large team.
Test Using Your Competitors
Before I get into the details below, it's worth mentioning that it sometimes makes more sense, and can save days of time, to have users test designs that already exist in the world. At Adobe, we sometimes had users test features that had already been built on competitors' sites, like iStock and Pond5.
It can be tempting and fun to jump into creating a new prototype in Framer or Adobe XD right away, but sometimes you can eliminate having to produce a design artifact altogether. Run some tests using what your competitors have already built.
This is where we get real fancy. A highly valuable aspect of using Framer is being able to use real data from APIs. We can get away from designing with often unrealistic, beautiful content, and allow users to interact with data that would be shown on the live product.
Adobe Stock API Example
This is an example of bringing real content from an API into a prototype. This is not a look at any particular project or feature. Rather, it's just a simple demonstration of what could be possible by combining powerful tools like an API and Framer.
The prototype below allows for a simple text query to be performed, as well as the ability to view the results at a larger size.
When testing with users or presenting to stakeholders for Adobe Stock, it was powerful to be able to enter any search query, for example, “coffee shop,” and have actual images from our site be displayed. Often this was critical in evaluating the success of designs for advanced search concepts. If a feature was clear to users but ultimately provided lackluster content, we knew it was a better decision to hold off on that implementation.
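As a rough sketch of how a prototype might wire a text query to real API data, here's a minimal example. The endpoint URL, query parameters, and response shape below are hypothetical stand-ins for illustration, not the actual Adobe Stock API or Framer internals.

```javascript
// Sketch: pulling real search results into a prototype.
// The endpoint and response shape are placeholders, not the real Adobe Stock API.
const SEARCH_ENDPOINT = "https://api.example.com/search"; // hypothetical URL

// Build a request URL for a text query, e.g. "coffee shop".
function buildSearchUrl(query, limit = 20) {
  const params = new URLSearchParams({ q: query, limit: String(limit) });
  return `${SEARCH_ENDPOINT}?${params}`;
}

// Pull thumbnail URLs out of a response, tolerating missing fields
// so the prototype doesn't break on sparse data.
function extractThumbnails(response) {
  return (response.results || [])
    .map((item) => item.thumbnailUrl)
    .filter(Boolean);
}

// In the prototype, a search box would call this and render the images.
async function runSearch(query) {
  const res = await fetch(buildSearchUrl(query));
  const data = await res.json();
  return extractThumbnails(data);
}
```

The point isn't the specific code; it's that a few lines of glue let users type any query and see live content, instead of the hand-picked imagery that static mockups rely on.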
Having design prototypes that feel like the ‘real thing’ allows us to test with users more effectively. It removes user distractions (they don’t have to pretend, or at least pretend far less) and it gets us closer to actionable testing results for iterating and improving designs. Often in usability testing, users thought they were interacting with a real site. The results from this usability testing are important for the next aspect, building consensus. But before that, let’s cover a caveat on usability testing.
The way you test and the kinds of questions you ask from the beginning are critical here, and we want to be conscious of bias and the nature of usability testing. By putting someone in a testing environment (like UserTesting.com), we are introducing an artificial context.
Someone placed in a scenario where they are being paid to complete a task for you is likely to finish it. They want you to get what you asked for. However, in natural contexts, a user might bail on your site, app, or landing page in a matter of seconds. Additionally, someone placed into a testing scenario will probably spend more time reading text and processing all the visuals than they normally would.
Finally, don't ask people, “Would you use this?”
There's a huge disconnect between what people say and what they actually do. Instead, you should focus on observing behaviors. If you're keen on asking speculative questions, try asking someone if they can share a specific example of how the product or functionality might have helped them in the past.
Running the test
All of this is to say that usability testing should be approached with caution and a critical perspective. There's knowledge to be gained from testing, but it's important to thoughtfully consider what questions you're going to use. In general, my experience has shown that you're better off asking someone to complete a task.
Don't give people too much instruction. Sit back and watch what happens, while clarifying as needed. You also need to be critical of the results you see in the tests. Ideally, have an additional person review the same sessions to surface different interpretations. Luckily, I had fantastic researchers with me during most of my three years working on Adobe Stock. Hi Zan and Rachel. 👋
At Adobe, I regularly teamed up with a researcher assigned to our team. Often these interactions involved sharing ideas on interactions to test, writing interview scripts, debating how to word certain questions without being leading, running tests and interviews, and ultimately evaluating results. In the end, our team valued bringing in a mix of data from usability testing while also consulting the product’s analytics data.
Lastly, I'll mention that you generally start to see patterns after testing with five to seven users. If something isn't working, it should become clear even after putting just a few people through it. After that, evaluate the results, try a different design approach, and test again.
A large part of the role of a designer is to build confidence and “sell” their design solution to stakeholders, especially when evolving an existing product within an organization that is not comfortable with making major product changes. In order to get things shipped, or even to get projects prioritized, designers need to present solutions and identify current problems effectively.
Video clips of users interacting with prototypes were a huge factor in not only validating decisions within our design team, but also in moving solutions forward with the product and engineering teams. It built team assurance when we had seen actual people successfully interact with a prototype that closely mimicked what the end result could be.
Our prototyping and testing allowed us to more accurately see the direction we were headed and feel confident about the results that would follow. It also reinforced the message that our design team could provide true value to the business. In this case, valid user insights paired with vetted solutions.
Beyond using prototypes combined with testing as back-up for a proposed solution, it also helps for identifying issues that we didn't know existed in the first place. A project might start with a focus on one area, while exposing a huge issue in another part of the user workflow that we weren't even looking at.
For example, during a usability test on Adobe Stock, we were testing the saving functionality, and discovered that several people were struggling with figuring out how to buy an image. That's a problem.
Our button for making a purchase on the site used the text “License” as a verb, but that was, understandably, not clear enough for people. We found that changing the copy to “Buy License” was much more effective. After finding this issue, we communicated it to our product team by sharing the video clips of users struggling to make purchases.
Designers can't just show up and make the call on product implementation decisions based on their creative instincts. We have to find evidence that supports our claims and decisions. After identifying problems, we can effectively communicate them to stakeholders and get those projects prioritized.
What I learned
- UI prototyping
- Quick concepting and ideation
- Collaborating with researchers
- Usability testing, not ‘user testing’
- User interviews
- Identifying bias in testing (Don't ask, “Which design do you like more!?”)
- Building team consensus
- Presenting and defending design decisions
- More effective engineering hand-off