Nudge
Instructor-driven student motivation and engagement software
An instructor- and student-facing application, Nudge was designed to increase student engagement with course materials. My team was brought in after development to provide unbiased user research.

Design Case
Role: Design Researcher / UX Researcher
Goal / Problem
In this project, my team was tasked with performing pre-scaling design research on, and a redesign of, an existing student-notification platform used by a small test team at Penn State University.
Unlike other entries in this portfolio, the outcome was a recommendation to terminate the project, which was subsequently scheduled for termination at the beginning of 2022.
What it is / Solution description
The product in question was the result of research done in Penn State’s educational psychology department. The tool was designed to deliver push notifications to students’ mobile devices encouraging study habits shown to be effective. Instructors used the interface to schedule and push these notifications.
Who was using it
At the time of the study, approximately ten thousand students had been enrolled in classes using the application, taught by a few dozen key instructors. Classes and course materials varied significantly. The majority of instructors using the app were part of the original test group, which had begun three years before our work, with a few newly recruited instructors having started only one semester prior.
Process of Discovery
Our process of discovery was textbook. Our team is often more creative and open to rapid ideation in design research, but we quickly became suspicious of shortcomings and failures in the product in question. To keep ourselves accountable to the data and to the process, we followed a classic UX research approach.
Design Researcher UX Sweep and Design Recommendations
Stage 1 was a simple UX sweep. The design team and researchers used the product and explored it thoroughly, taking note of design issues and UX sticking points that might be affecting usage on the instructor or student side. The team identified significant design issues that would require a full redesign of the product, including missing core functions: for example, there was no way to carry data over between semesters, so instructors had to manually re-enter it each term.
Functionality / Stress Testing
Stage 2 was a more technical stress test of the app to surface existing issues. We identified significant mobile app problems that led to errors and crashes.
Designer / Developer Interviews
In stage 3, we spoke with the designers and developers of the app. This included the research team that originated the idea, the technical development and support team, and the research team that maintained day-to-day usage.
In this stage we sought to identify creator-led redesign needs and ideas. This group of stakeholders interacted with the product at a production level and, through that, had insight into critical functionality needs.
User Interviews
We began identifying critical issues within our stage 4 user interviews. Speaking with faculty members using the application, we learned that a number of severe issues were already hampering use and would make scaling impossible.
Most critical of these issues was the vast disparity between actual usage and the perceived usage reported by the development team. These disparities included a UX that discouraged retention to the point that the research team had to act as intermediaries between users and the system; push notifications turned off so regularly that instructors sent an email with every notification telling students to check their notifications; and severe data-flow chokepoints that left designers, developers, and users without an understanding of what the others were doing.
Quantitative Use Analysis
In stage 5, we began examining the actual usage data generated by the system. This confirmed our suspicions that actual usage followed a significantly different pattern than the one assumed by the design and development team. Most notably, we discovered that, of the dozens of instructors using the application, 5 accounted for 80% of use, and 2 of those were on the original design team.
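The concentration figure above came from a straightforward roll-up of notification activity per instructor. Purely as an illustration, a minimal sketch of that kind of check, assuming a hypothetical per-notification event export (the file name and columns here are placeholders, not the actual system’s schema), might look like this:

```python
# Illustrative only: a minimal usage-concentration check over a hypothetical
# event export with one row per notification sent, tagged by instructor.
import pandas as pd

# Hypothetical export; column names are assumptions for this sketch.
events = pd.read_csv("notification_events.csv")  # assumed columns: instructor_id, sent_at

# Count events per instructor, sorted from heaviest to lightest user.
per_instructor = events["instructor_id"].value_counts()

# Share of all activity attributable to the top five instructors.
top5_share = per_instructor.head(5).sum() / per_instructor.sum()
print(f"Top 5 instructors account for {top5_share:.0%} of all notification activity")
```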
Designer / Developer Feedback
Following data analysis, we brought our findings to the design and development teams to get their perspectives. In this stage, we gave both teams the opportunity to provide open feedback on our findings, which were presented without bias and without recommendation. Our goal was to understand how the original team thought they might address these issues, or what steps had already been taken to do so. Unfortunately, it became apparent that the issues had been completely unknown to them, and that the team did not have an answer for the burning question of “how can we scale this?”
Reporting
Given the evidence found in our design research process, my team recommended the termination of the product. We did so by presenting the usability issues we had uncovered, the lack of evidence of efficacy, and the increasing cost of maintaining the project. We also recommended switching to more affordable, commercially available options with similar functionality, far better design, and dedicated teams maintaining them.
Why This Solution
Though no one takes pleasure in recommending the termination of a product, doing so is sometimes necessary in a major organization. The sunk cost of killing a project, even one that the team cares for, is nothing compared to the money sink a product can become when there is no plan in place to recoup losses or address serious design issues.
It is not common to include these types of projects in a portfolio, but, in my opinion, a design researcher’s job is to give unbiased assessments of the past, present, and future of a product. When the product cannot justify itself, the design researcher’s job is to present the hard evidence that makes the decision a pragmatic one.