
HOW WAS THE USER FEEDBACK COLLECTED DURING THE DEVELOPMENT PROCESS

Collecting user feedback was an integral part of our development process. We wanted to ensure that what we were building was genuinely useful and usable, and that it addressed real user needs. Getting input and feedback from potential users at various stages of development helped us continually improve the product and build something people truly wanted.

In the early concept phase, before we started any design or development work, we conducted exploratory user interviews and focus groups. We spoke to over 50 potential users from our target demographic to understand their current workflow and pain points. We asked open-ended questions to learn what aspects of their process caused the most frustration and where they saw opportunities for improvement. These qualitative interviews revealed several core needs that we felt our product could address.

After analyzing the data from these formative sessions, we created paper prototypes of potential user flows and interfaces. We then conducted usability testing with these prototypes, having 10 additional users try to complete sample tasks while thinking aloud. As they used the prototypes, we took notes on where they got stuck, what confused them, and what they liked. Their feedback helped validate whether we had identified the right problems to solve and pointed out ways our initial designs could be more intuitive.

With learnings from prototype testing incorporated, we moved into high-fidelity interactive wireframing of core features and workflows. We created clickable InVision prototypes that mimicked real functionality. These digital prototypes allowed for more realistic user testing. Another 20 participants were recruited to interact with the prototype as if it were a real product. We observed them and took detailed notes on frustrations, confusions, suggestions and other feedback. Participants also filled out post-task questionnaires rating ease of use and desirability of different features.

The insights from wireframe testing helped surface UX issues early and guided our UI/UX design and development efforts. Key feedback involved structural changes to workflows, simplifying language, and improvements to navigation and information architecture. All issues and suggestions were tracked in a feedback tracker to ensure they were addressed before subsequent rounds of testing.
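A feedback tracker like the one described above can be as simple as a structured log with sources, categories, priorities, and statuses. The sketch below is a hypothetical minimal version in Python; the class and field names are illustrative, not drawn from any actual tooling used on the project:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FeedbackItem:
    source: str                 # e.g. "wireframe test, participant 7"
    category: str               # e.g. "navigation", "language", "workflow"
    description: str
    priority: str = "medium"    # "low" / "medium" / "high"
    status: str = "open"        # "open" / "resolved"

@dataclass
class FeedbackTracker:
    items: List[FeedbackItem] = field(default_factory=list)

    def log(self, item: FeedbackItem) -> None:
        """Record a new piece of feedback."""
        self.items.append(item)

    def open_items(self, category: Optional[str] = None) -> List[FeedbackItem]:
        """List unresolved items, optionally filtered by category."""
        return [i for i in self.items
                if i.status == "open"
                and (category is None or i.category == category)]

    def resolve(self, item: FeedbackItem) -> None:
        """Mark an item as addressed before the next round of testing."""
        item.status = "resolved"
```

Checking `open_items()` before each testing round gives a quick view of which issues still need to be addressed, which is the workflow the paragraph above describes.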

Once we had an initial functional version, beta testing began. We invited 50 external users who pre-registered interest to access an unlisted beta site and provide feedback over 6 weeks. During this period, we conducted weekly video calls where 2-4 beta testers demonstrated use of the product and shared candid thoughts. We took detailed notes during these sessions to capture specific observations, pain points, issues and suggestions for improvement. Beta testers were also given feedback surveys after 1 week and 6 weeks of use to collect quantitative ratings and qualitative comments on different aspects of the experience over time.

Through use of the functional beta product and discussions with these dedicated testers, we gained valuable insights into real-world usage that high-fidelity prototypes could not provide. Feedback centered around performance optimizations, usability improvements, desired additional features and overall satisfaction. All beta tester input was triaged and prioritized to implement critical fixes and enhancements before public launch.

Once the beta period concluded and prioritized changes were implemented, we conducted one final round of internal user testing. Ten non-technical users explored the updated product and flows without guidance and provided open feedback. This ensured the user experience was coherent enough for new users to understand intuitively without support.

With user testing integrated throughout our development process, from paper prototyping to beta testing, we were able to build a product rooted in addressing real user needs uncovered through research. The feedback shaped important design decisions and informed key enhancements at each stage. Launching with feedback from over 200 participants helped ensure a cohesive experience that was intuitive, useful and enjoyable for end users. The iterative process of obtaining input and using it to continually improve helped make user-centered design fundamental to our development methodology.

HOW WILL THE FEEDBACK FROM CLINICAL EXPERTS AND PATIENTS BE COLLECTED AND ANALYZED

Collecting meaningful and useful feedback from clinical experts and patients is crucial for the development of new medical treatments and technologies. A robust feedback process allows researchers and developers to gain valuable insights that can help improve outcomes for patients. Some key aspects of how feedback could be collected and analyzed at various stages of the development process include:

During early research and development stages, focus groups and design thinking workshops with clinicians and patients can help inform what needs exist and how new solutions may help address unmet needs. Audio recordings of these sessions would be transcribed to capture all feedback and ideas. Transcripts would then be analyzed for themes, pain points, and common insights using qualitative data analysis software. This early feedback is formative and helps shape the direction of the project.

Once prototypes are developed, usability testing sessions with clinicians and patients would provide feedback on early user experiences. These sessions would be video recorded with participants’ consent to capture interactions with the prototypes. Recordings would then be reviewed and analyzed to identify any usability issues, points where participants struggled, aspects they found intuitive, and overall impressions. Researchers may use qualitative coding techniques to systematically analyze the recordings for recurring themes. Feedback from these sessions helps make refinements and improvements to prototypes before larger pilot studies.

When pilot studies involve real-world use of new technologies or treatments, multiple methods are useful for collecting comprehensive feedback. Clinicians and patients in pilot studies could be asked to complete online questionnaires about their experiences at various time points such as initial use, one week follow up, one month follow up, and study completion. Questions would address impact on clinical workflows, ease of use, patient experience and outcomes, and overall impressions. Questionnaires would be designed using best practices for question wording and response scales to produce high quality quantitative data.
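Once Likert-style questionnaire responses are collected at those time points, summarizing them is straightforward. The sketch below is an illustrative stdlib-only Python helper (the function name and the "top box" metric choice are assumptions, not part of any specific study protocol):

```python
from statistics import mean, median
from collections import Counter

def summarize_likert(responses, scale=(1, 5)):
    """Summarize Likert responses: central tendency and distribution.

    responses: iterable of integer ratings
    scale:     inclusive (low, high) bounds of the response scale
    """
    lo, hi = scale
    valid = [r for r in responses if lo <= r <= hi]  # drop out-of-range entries
    counts = Counter(valid)
    return {
        "n": len(valid),
        "mean": round(mean(valid), 2),
        "median": median(valid),
        # Full distribution, including zero-count scale points
        "distribution": {point: counts.get(point, 0) for point in range(lo, hi + 1)},
        # "Top box" share: percent of respondents choosing the top two points
        "top_box_pct": round(100 * (counts[hi] + counts[hi - 1]) / len(valid), 1),
    }
```

Running the same summary at each follow-up (initial use, one week, one month, completion) makes it easy to see whether ratings of ease of use or patient experience shift over the pilot.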

In addition to questionnaires, pilot study participants could optionally participate in 30-60 minute interviews or focus groups. A semi-structured interview guide would be used consistently across all interviews and groups to allow for systematic comparative analysis while still permitting open discussion of experiences. Interviews and groups would be audio recorded with consent for transcription and analysis. Recordings may be transcribed using speech recognition software and transcriptions would then be coded and analyzed thematically. Quantitative questionnaire data and qualitative interview/group data combined provide a comprehensive picture of real-world experiences.

To analyze feedback at scale from large pilot studies or post-market surveillance, Natural Language Processing (NLP) techniques may be applied to unstructured text data like questionnaire comments, transcripts, clinical notes, and written reviews from patients and clinicians. NLP involves using machine learning algorithms to extract semantic meaning from vast amounts of free-form text. It allows for sentiment analysis to understand whether feedback is positive or negative, as well as topic modeling to surface common themes or concerns that emerge from the data. Combined with techniques like statistical analysis of Likert scale responses, this approach analyzes both qualitative and quantitative feedback at a large scale with a level of rigor not possible through manual coding alone.
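Production pipelines would use trained models for sentiment and proper topic modeling (e.g. LDA), but the core idea can be illustrated with a deliberately tiny, hand-built sketch. Everything here is an assumption for illustration: the lexicons, stopword list, and function names are toy placeholders, not a real NLP system:

```python
import re
from collections import Counter

# Toy sentiment lexicons -- a real system would use a trained classifier
POSITIVE = {"easy", "intuitive", "helpful", "fast", "clear"}
NEGATIVE = {"confusing", "slow", "frustrating", "difficult", "broken"}
STOPWORDS = {"the", "a", "is", "was", "to", "and", "it", "i", "of", "in", "very"}

def tokenize(text):
    """Lowercase and split free-form text into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment(comment):
    """Classify a comment by counting lexicon hits."""
    words = set(tokenize(comment))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def top_themes(comments, n=3):
    """Surface the most frequent content words as crude 'themes'."""
    counts = Counter(
        w for c in comments for w in tokenize(c)
        if w not in STOPWORDS | POSITIVE | NEGATIVE
    )
    return [word for word, _ in counts.most_common(n)]
```

Even this crude frequency-based version shows the shape of the output: a sentiment label per comment plus a ranked list of recurring topics, which is what feeds the tracking database described next.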

All analyzed feedback would be systematically tracked in a searchable database along with key details about when and from whom the feedback was received. Clinicians, researchers and product developers would have access to review feedback themes, sentiments, and identified issues/enhancements. Regular reports on gathered feedback would also help inform strategic product roadmaps and planning for future research studies. The database allows feedback to have a visible impact and influence on the continuous improvement of solutions over time based on real-world input from intended end users.
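A searchable feedback database of this kind can be sketched with a single table keyed on date, source, theme, and sentiment. The schema, column names, and sample rows below are hypothetical illustrations (using Python's built-in sqlite3), not a prescribed design:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path in practice
conn.execute("""
    CREATE TABLE feedback (
        id          INTEGER PRIMARY KEY,
        received_on TEXT NOT NULL,   -- ISO date the feedback was received
        source      TEXT NOT NULL,   -- 'clinician' or 'patient'
        theme       TEXT,            -- coded theme, e.g. 'workflow'
        sentiment   TEXT,            -- 'positive' / 'negative' / 'neutral'
        comment     TEXT             -- original free-text comment
    )
""")

# Illustrative sample rows
conn.executemany(
    "INSERT INTO feedback (received_on, source, theme, sentiment, comment) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        ("2024-03-01", "clinician", "workflow", "negative",
         "Adds an extra step to charting"),
        ("2024-03-05", "patient", "usability", "positive",
         "Reminders were easy to set up"),
    ],
)

# Example report query: feedback volume by theme and sentiment
rows = conn.execute(
    "SELECT theme, sentiment, COUNT(*) FROM feedback "
    "GROUP BY theme, sentiment"
).fetchall()
```

Grouping queries like the one at the end are what feed the regular reports mentioned above: they turn individual comments into counts of themes and sentiments that can be tracked release over release.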

Collecting feedback from multiple qualitative and quantitative sources at various stages of development, coupled with robust analytic techniques, helps uncover valuable insights that can strengthen new medical solutions to better serve clinicians and improve patient outcomes. A systematic, multifaceted approach to feedback collection and analysis ensures a continuous learning process throughout the lifecycle of developing technologies and treatments.