05 Oct 12:15 — 12:45
About the session
Building beloved user applications is a challenging yet rewarding pursuit for those of us working in technology today. While real user monitoring (RUM) metrics are typically added early to external-facing applications, they are often an afterthought when building applications for users within organisations. Instead, we rely on anecdotal discussions and review feedback that, for many reasons, can leave us with an incomplete or inaccurate picture of the adoption of the software we build.
In this talk, I will draw on my experience building applications in investment banking to discuss why long-term feedback on feature adoption can be difficult to obtain and validate.
We will also outline how the real user monitoring and performance capabilities in tools such as Elastic User Experience, or other RUM collectors, can help quantify user satisfaction and adoption, ensuring we provide effective experiences for users.
- Debunking the apprehension that tracking user behaviour through monitoring is inherently intrusive
- Presentation of the reasons why qualitative feedback on features can be difficult to validate, and why it can be hard to elicit in these institutions outside of review sessions
- An overview of RUM metrics that can be captured to help track usage, potentially as KPIs, and how they can be collected
- Presentation of sample dashboards that can give insights into these metrics, and how they can help identify issues to be remediated, or potential feature adoption problems, in applications focused on internal clients
- Discussion of potential limitations of RUM to be aware of when examining user adoption of applications and new features
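As a small taste of the kind of satisfaction metric the session covers, one common way to quantify user experience from collected page-load timings is an Apdex-style score. The sketch below is illustrative only; the threshold value and sample timings are assumptions, not figures from the talk:

```python
def apdex(samples_ms, threshold_ms=500):
    """Apdex-style satisfaction score from page-load samples.

    Loads at or under the threshold count as 'satisfied', loads up to
    4x the threshold count as 'tolerating' (half weight), anything
    slower counts as 'frustrated' (zero weight).
    """
    satisfied = sum(1 for s in samples_ms if s <= threshold_ms)
    tolerating = sum(1 for s in samples_ms if threshold_ms < s <= 4 * threshold_ms)
    return (satisfied + tolerating / 2) / len(samples_ms)

# Hypothetical page-load times in milliseconds for one route
loads = [320, 480, 900, 2600, 410]
print(round(apdex(loads), 2))  # → 0.7
```

A score like this, tracked per route or feature over time, is one example of a RUM-derived KPI that a dashboard could surface.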
Themes: Software Delivery, Design, Metrics, Culture, Usability, Product, Visualisation