
HOW DID YOU GATHER FEEDBACK FROM USERS AFTER THE INITIAL LAUNCH

Gathering user feedback is crucial after the initial launch of any new software, product, or service. It allows companies to understand how real people are actually using and experiencing their offering, identify issues or opportunities for improvement, and make informed decisions on what to prioritize for future development.

For our initial launch, we had a multi-pronged approach to feedback collection that involved both quantitative and qualitative methods. On the quantitative side, we implemented tracking of key metrics within the product itself such as active user counts, time spent on different features, error/crash rates, completion of onboarding flows, and conversion rates for core tasks. This data was automatically collected in our analytics platform and provided insights into what parts of the experience were working well and where users may be dropping off.
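As a minimal sketch of the kind of in-product metric tracking described above, the snippet below logs timestamped events and computes a funnel completion rate. The event names, fields, and in-memory store are illustrative assumptions, not the actual analytics schema.

```python
from datetime import datetime, timezone

def track_event(store, user_id, event_name, properties=None):
    """Append a timestamped analytics event to an event store (here, a plain list)."""
    store.append({
        "user_id": user_id,
        "event": event_name,
        "properties": properties or {},
        "ts": datetime.now(timezone.utc).isoformat(),
    })

def funnel_completion_rate(store, start_event, end_event):
    """Share of users who fired start_event that also fired end_event."""
    started = {e["user_id"] for e in store if e["event"] == start_event}
    finished = {e["user_id"] for e in store if e["event"] == end_event}
    return len(started & finished) / len(started) if started else 0.0

events = []
track_event(events, "u1", "onboarding_started")
track_event(events, "u1", "onboarding_completed")
track_event(events, "u2", "onboarding_started")
print(funnel_completion_rate(events, "onboarding_started", "onboarding_completed"))  # 0.5
```

In practice a hosted analytics platform would replace the list, but the drop-off question it answers is the same.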

We also implemented optional in-product surveys that would pop up after significant user milestones like completing onboarding, making a purchase, or using a new feature for the first time. These surveys asked users to rate their satisfaction with various aspects of the experience on a 1-5 star scale and to leave open-ended comments. Automatic trigger-based surveys allowed us to collect statistically meaningful sample sizes of feedback on specific parts of the experience.
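The milestone-triggered survey logic above can be sketched roughly as follows. The milestone names and the once-per-milestone rule are assumptions for illustration.

```python
# Hypothetical milestone names; the real trigger list would come from product config.
SURVEY_MILESTONES = {"onboarding_completed", "first_purchase", "first_feature_use"}

def should_show_survey(milestone, already_surveyed):
    """Show a survey only for a trigger milestone the user hasn't been surveyed on yet."""
    return milestone in SURVEY_MILESTONES and milestone not in already_surveyed

def record_response(responses, user_id, milestone, stars, comment=""):
    """Store a 1-5 star rating plus an optional open-ended comment."""
    if not 1 <= stars <= 5:
        raise ValueError("rating must be 1-5 stars")
    responses.append({"user": user_id, "milestone": milestone,
                      "stars": stars, "comment": comment})

responses = []
seen = set()
if should_show_survey("onboarding_completed", seen):
    record_response(responses, "u1", "onboarding_completed", 4, "Smooth setup")
    seen.add("onboarding_completed")
```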

In addition to in-product feedback mechanisms, we initiated several email campaigns targeting both active users and people who had started but not completed the onboarding process. These emails simply asked users to fill out an online survey sharing their thoughts on the product in more depth. We saw response rates of around 15-20% for these surveys, which provided a valuable source of qualitative feedback.

To gather perspectives from customers who did not complete the onboarding process or become active users, we also conducted interviews with 10 individuals who had started but not finished signing up. These interviews dug into the specific reasons for drop-off and pain points encountered during onboarding. Insights from these interviews were especially helpful for identifying major flaws to prioritize fixing in early updates.

For active customers, we hosted two virtual focus groups with 5 participants each to get an even deeper qualitative understanding of how they used different features and what aspects of the experience could be improved. Focus groups allowed participants to build off each other’s responses in a dynamic discussion format, which uncovered nuanced feedback.

In addition to directly surveying and interviewing users ourselves, we closely monitored both forums on our own website and general discussion sites for unprompted feedback. Searching for mentions of our product and service on sites like Reddit and Twitter provided a window into conversations we were not directly a part of. We also had a dedicated email address for user support tickets that generated a wealth of feedback as customers reached out about issues or requested new features.

Throughout the process, all feedback received, both quantitative and qualitative, was systematically logged, tagged, and prioritized by our product and design teams. The in-product usage metrics were the biggest driver of prioritization, but qualitative feedback helped validate hypotheses and shed new light on problems detected in analytics. After distilling learnings from all sources into actionable insights, we made several iterative updates within the first 3 months post-launch focused on improving core tasks, simplifying onboarding flows, and addressing common pain points.
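The tag-and-prioritize step can be sketched with a simple theme counter. The tag names and the count-based heuristic are illustrative, not the team’s actual rubric, which would also weigh severity and reach.

```python
from collections import Counter

def prioritize(feedback_items):
    """Rank feedback themes by how many distinct items mention them."""
    counts = Counter(tag for item in feedback_items for tag in item["tags"])
    return counts.most_common()

feedback = [
    {"source": "survey",    "tags": ["onboarding", "confusing-copy"]},
    {"source": "interview", "tags": ["onboarding"]},
    {"source": "support",   "tags": ["crash"]},
]
print(prioritize(feedback))  # onboarding ranks first with 2 mentions
```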

Following these initial rounds of updates, we repeated the full feedback collection process to gauge how well changes addressed issues and to continue evolving the product based on a continuous feedback loop. User research became embedded in our core product development cycle, and we now have dedicated staff focused on ongoing feedback mechanisms and usability testing for all new features and experiments. While collecting feedback requires dedicated resources, it has proven invaluable for understanding user needs, identifying problems, building trust with customers, and delivering the best possible experience as our service continues to evolve.

HOW CAN THE DATABASE APPLICATION BE DEPLOYED TO END USERS FOR FEEDBACK AND ENHANCEMENTS

The first step in deploying the database application to end users is to ensure it is in a stable and complete state to be tested by others. All functionality should be implemented, bugs should be minimized, and performance should be adequate. It’s a good idea to do internal testing by other teams within the organization before exposing the application externally. This helps catch any major issues prior to sharing with end users.

Once internal testing is complete, the application needs to be prepared for external deployment. The deployment package should contain everything needed to install and run the application. This would include executables, configuration files, database scripts to set up the schema and seed data, documentation, and a readme file explaining how to get started. The deployment package is typically distributed as a downloadable file or files that can be run on the target system.
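As a hedged sketch of assembling such a deployment package, the snippet below bundles the listed artifact types into one downloadable zip. All file names and contents are placeholders.

```python
import pathlib
import tempfile
import zipfile

def build_package(out_path, files):
    """Bundle a {archive_name: content} mapping into a single zip file."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, content in files.items():
            zf.writestr(name, content)
    return out_path

# Placeholder artifacts standing in for the real executables, config,
# database scripts, and documentation.
pkg = build_package(
    pathlib.Path(tempfile.gettempdir()) / "app-beta-1.zip",
    {
        "README.md": "How to get started...",
        "config/app.ini": "[db]\nhost=localhost\n",
        "db/schema.sql": "CREATE TABLE users (id INTEGER PRIMARY KEY);",
    },
)
print(zipfile.ZipFile(pkg).namelist())
```

A real pipeline would add the compiled executables and version the archive name, but the single-download shape is the same.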

The next step is to determine the deployment strategy. Will it be a closed or controlled beta with a small number of selected users, or an open public beta? A controlled beta allows issues to be identified and fixed in a limited setting before widespread release, while an open beta garners broader feedback. The deployment strategy needs to be chosen based on the complexity of the application, goals of the beta period, and risk tolerance.

With the deployment package and strategy determined, it’s time to engage with users to participate in the beta. For a controlled beta, relevant people within the target user community should be directly contacted to request their participation. An open call for participation can also be used. When recruiting beta testers, it’s important to be clear that the purpose is feedback and testing rather than fully rolled-out production usage. Testers need to understand and accept that bugs may be encountered.

Each beta tester is provided with access to install and run the application from the deployment package. During onboarding, testers should be given documentation on application features and workflows, as well as guidelines on providing feedback. It’s useful to have testers sign a non-disclosure agreement and terms of use if it’s a controlled beta of an unreleased application.

With the application deployed, the feedback period begins. Testers use the application for its intended purposes, exploring features and attempting different tasks. They document any issues experienced, such as bugs, usability problems, missing features, or requests for enhancements. Feedback should be collected periodically through online questionnaires, interviews, support tickets, or other predefined mechanisms.

Throughout the beta, the development team monitors incoming feedback and works to address high priority problems. Fixes are deployed to testers as new versions of the application package. This continual feedback-implement-test cycle allows improvements to be made based on real-world usage experiences. As major issues are resolved, more testers may be onboarded to further stress test the application.

Once the feedback period ends, all input from testers is analyzed to finalize any outstanding work. Common feedback themes may indicate deeper problems or opportunities for enhancements. User experience metrics like task success rates and task completion times provide quantitative insights. The development team reviews all data to decide if the application is ready for general release, or if another beta cycle is needed.
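The quantitative metrics mentioned above, task success rates and completion times, could be computed along these lines. The attempt records and field names are assumptions for illustration.

```python
from statistics import median

def task_metrics(attempts):
    """Return (success rate, median completion time in seconds for successful runs)."""
    successes = [a for a in attempts if a["succeeded"]]
    rate = len(successes) / len(attempts) if attempts else 0.0
    med = median(a["seconds"] for a in successes) if successes else None
    return rate, med

attempts = [
    {"succeeded": True,  "seconds": 42},
    {"succeeded": True,  "seconds": 30},
    {"succeeded": False, "seconds": 90},
]
rate, med = task_metrics(attempts)
print(rate, med)  # success rate of 2/3, median time 36.0s
```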

When ultimately ready for launch, the final deployment package is published through appropriate channels for the intended user base. For example, a consumer-facing app would be released to Android and iOS app stores, while an enterprise product may be deployed through internal tools and support portals. Comprehensive documentation, including setup guides, tutorials, and product handbooks, supports the production rollout.

Deploying a database application to end users for testing and improvement is a structured process. It requires technical, process, and communications work to carefully manage a productive feedback period, continually refine the product based on real experiences, and validate readiness for production usage. The feedback obtained directly from target users is invaluable for creating a high-quality application that genuinely meets real-world needs.

HOW CAN THE SUBJECT MATTER EXPERT ENSURE THAT THE PROJECT MEETS THE NEEDS OF END USERS

The subject matter expert (SME) plays a vital role in ensuring a project successfully delivers value to end users. As the person with in-depth knowledge about the domain and stakeholder needs, the SME has unique insights that can guide project requirements, design, development, and implementation.

Early and continuous end user engagement is key. The SME should facilitate user research at the outset to uncover user pain points, desires, and existing mental models. Methods like interviews, surveys, focus groups, job shadowing, and usability testing provide diverse perspectives. Personas and user stories translate research findings into actionable requirements.

As the voice of the user, the SME should participate in requirements definition and validation. They can help the project team interpret research and prioritize based on user importance and feasibility. The resulting requirements specification reflects user needs and enables traceability. The SME also reviews and approves deliverables to confirm alignment.

The SME advises on user experience (UX) and interface design to ensure solutions are easy to learn, efficient to use, and error-proof. They advocate for intuitive interaction paradigms, meaningful and unambiguous terminology, and responsive support for varied users, tasks and contexts of use. Usability testing involving users supports iterative improvement.

For complex domains, the SME helps break down requirements into manageable features and provides subject matter training. They act as a liaison between implementation teams and users to clarify assumptions and address obstacles early. As new needs emerge, the SME captures them through revisions to the requirements and guides the resulting changes.

During deployment and transition to support, the SME coaches end users, documents processes, and identifies areas for supplementary guidance materials like job aids, quick references and help functions. They solicit feedback to continuously enhance adoption, success and satisfaction. The post-implementation support period is crucial for benefits realization.

As an objective observer, the SME monitors real-world usage and performance to verify that solutions are working as intended and delivering expected outcomes. They compile metrics on things like completion rates, error frequencies, and task durations to highlight what’s going well or what needs adjustment. Formal usability studies help justify refinements.

Change management is vital with users. The SME plays a lead role in communications, training, incentivization and addressing resistance to minimize disruptions. Their credibility and expertise reassure users of benefits while preparing them for transitions. A culture of open information exchange and responsiveness to issues fosters user buy-in, compliance and advocacy over the long term.

The SME participates in maintenance to incorporate lessons learned as well as handle changes in user profiles, technologies and business needs. They keep requirements and designs flexible enough to support future enhancements with minimal rework. Well-timed roadmap discussions balance necessary upgrades with avoiding “analysis paralysis”.

Throughout the project lifecycle and beyond, the SME establishes a collaborative relationship and keeps users front and center. Their dedication to understanding real user perspectives avoids assumptions and delivers outcomes grounded in reality. With proactive methods and continuous improvement mindset, the SME empowers users and maximizes project success, adoption and realization of strategic benefits. Effective guidance from the SME helps ensure user requirements are done right from the start.

A subject matter expert can ensure a project meets end user needs by thoroughly involving users upfront and throughout via research, requirement validation, UX design collaboration, training, deployment support, monitoring, change communication, and maintenance involvement. Their in-depth domain understanding and focus on user perspectives are invaluable for delivering the right solutions that are well received and create the intended impact. With the SME championing the user voice, projects have much greater chances of fulfillment and long-term satisfaction.

HOW DID THE UTA ACCESS APP ADDRESS THE SPECIFIC NEEDS OF VISUALLY IMPAIRED USERS

The Utah Transit Authority (UTA) recognized that their mobile ticketing and planning app needed to be fully accessible for users with visual impairments in order to provide equal access to public transportation. When developing the UTA Access app, they conducted extensive user research and usability testing with organizations for the blind to understand the unique challenges visually impaired commuters face.

A major priority was to make all content and functionality accessible without requiring sight. This started at the most basic level of app design. The UTA Access development team decided on a simple, clean interface without unnecessary graphics or images that would be meaningless for screen readers. They settled on a basic light color scheme with high color contrasts tested using accessibility evaluation tools.
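The contrast testing mentioned above typically checks color pairs against the WCAG 2.x contrast-ratio formula. A minimal sketch of that check, following the WCAG relative-luminance definition (the color values here are just examples):

```python
def _channel(c):
    """Linearize one sRGB channel (0-255) per the WCAG relative-luminance formula."""
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors; ranges from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background: maximum possible contrast.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
print(round(ratio, 1))  # 21.0, well above the 4.5:1 WCAG AA minimum for body text
```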

All text was implemented using semantic HTML for optimal screen reader support. Font sizes, styles, and spacing were carefully designed to be easily readable by text-to-speech software at different zoom levels. Navigation was kept straightforward using clearly labeled tabs and simple lists rather than multi-level drop-downs that could get confusing.

Forms and inputs were optimized for accessibility. Labels were programmatically associated with each field to describe it appropriately. Text fields and buttons had large touch targets tested to work reliably with finger gestures. Select boxes were expanded to full lists to avoid confusing screen readers. Error states were announced verbally to inform users of validation issues.

Perhaps most importantly, the entire app was built to be operable without visual cues. All functionality and actions were available through standard iOS gestures detectable by VoiceOver, like taps, swipes, and pinches, rather than relying on visual interactions. Navigation, menus, maps, and buttons all worked seamlessly by touch alone.

Detailed audio and haptic feedback was implemented at each step to guide non-visual use. Form entries announced their content as fingers moved over text. Options in lists spoke when selected. Validation errors were vocalized as they were found. Map interactions utilized precision haptics to locate stops by feel. These cues provided an experience equivalent to what sighted users see visually.

Maps and trip planning posed unique challenges given their visual nature, so significant effort went into ensuring these key features still worked for the blind. Public transit routes and locations were exposed programmatically as text rather than images alone so screen readers could understand the map as a network. Zoom and pan functions had clickable text overlays to control the view without seeing. Pinch gestures triggered distance measurement between points read aloud.

Stops, stations, and transportation options on maps were all discoverable through clearly labeled text bubbles that popped up with proximity. Users could navigate these details through standard gestures without needing to interpret visual markers. Routes for trip planning auto-populated with full descriptions of each leg such as “Walk north on Main St for 3 blocks then board the Red Line train heading east.”
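Generating spoken leg descriptions like the one quoted above is essentially string templating over a trip model. A sketch under assumed leg types and field names (not the app’s actual data model):

```python
def describe_leg(leg):
    """Render one trip leg as a spoken-style instruction; leg shapes are hypothetical."""
    if leg["mode"] == "walk":
        return f"Walk {leg['direction']} on {leg['street']} for {leg['blocks']} blocks"
    if leg["mode"] == "train":
        return f"board the {leg['line']} train heading {leg['direction']}"
    return "Continue to the next step"

trip = [
    {"mode": "walk", "direction": "north", "street": "Main St", "blocks": 3},
    {"mode": "train", "line": "Red Line", "direction": "east"},
]
print(" then ".join(describe_leg(leg) for leg in trip))
# Walk north on Main St for 3 blocks then board the Red Line train heading east
```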

Fare payment was made as accessible as possible given financial transaction requirements. Cards could be purchased, loaded, and managed through logical, linear flows. Users entered data through expansive text entry rather than cryptic buttons. Card numbers and expiration dates were read back aloud for confirmation. Transaction status updated with voice descriptions of completion or issues.

The UTA Access app met and exceeded accessibility standards and guidelines by anticipating how visually impaired users actually experience mobile apps through non-visual means. Through optimized design, rich feedback techniques, and innovative approaches to making mapping functionality accessible without vision, it empowered this underserved community with fully independent multi-modal trip planning and fare management on par with sighted travelers. User testing showed it successfully eliminated participation barriers and enabled equal transportation access for the blind.

Through rigorous user research, established best practices in accessible design and development, attention to technical details, and creative solutions, the UTA Access app addressed the transportation needs of visually impaired riders in a truly meaningful and equitable way. It serves as an excellent example for other public transit agencies on inclusively delivering essential mobility services through mobile platforms for all users regardless of ability.