By: Drew Flanagan
The “Evaluate” phase has been tricky because it is difficult to assess your own design objectively.
When you share your design, many people respond very positively, especially those you are close with. Others, such as colleagues or peers you know less well, tend to focus heavily on the positives of the design so as not to come across as “too negative” or unhelpful.
To break this cycle, my group has started soliciting anonymous feedback on our work from members of the Duke community. This method of evaluation is helpful because community members are invested in improving the community, and the online form we created lets them be constructive with their feedback without feeling like they are being overly critical.
While the anonymous survey reduced potential bias, another challenge of “Evaluation” has been making sure we ask the right questions. How do we develop questions for testers that get at our “how might we” statement (the crux of our design) rather than getting distracted by accessories, such as the technology used for implementation? To address this challenge, we have geared our survey to focus specifically on whether our design meets an individual and community need rather than whether it is aesthetically or technically sufficient.
Hearing critiques of our idea has not been easy, but our team has continued to welcome them. User-centered design requires this open dialogue. Though it can feel excessive or unnecessary at first, we always learn something from user testing, even if it is just an affirmation of existing features or a critique of an aspect we already know needs improvement.
I think I’ve made the most impact in helping my team adapt to feedback and suggested changes. Often, we can get stuck when we are forced to consider a new aspect of our design or modify a feature completely. However, I am proud of my ability to keep bringing the team together to think through suggestions from users as we adapt our design.