
HOW WAS THE USER FEEDBACK COLLECTED DURING THE DEVELOPMENT PROCESS

Collecting user feedback was an integral part of our development process. We wanted to ensure that what we were building was useful and usable and that it addressed real user needs. Getting input from potential users at various stages of development helped us continually improve the product and build something people truly wanted.

In the early concept phase, before we started any design or development work, we conducted exploratory user interviews and focus groups. We spoke to over 50 potential users from our target demographic to understand their current workflow and pain points. We asked open-ended questions to learn what aspects of their process caused the most frustration and where they saw opportunities for improvement. These qualitative interviews revealed several core needs that we felt our product could address.

After analyzing the data from these formative sessions, we created paper prototypes of potential user flows and interfaces. We then conducted usability testing with these prototypes, having 10 additional users try to complete sample tasks while thinking out loud. As they used the prototypes, we took notes on where they got stuck, what confused them, and what they liked. Their feedback helped validate whether we had identified the right problems to solve and pointed out ways our initial designs could be more intuitive.

With learnings from prototype testing incorporated, we moved into high-fidelity interactive wireframing of core features and workflows. We created clickable InVision prototypes that mimicked real functionality. These digital prototypes allowed for more realistic user testing. Another 20 participants were recruited to interact with the prototype as if it were a real product. We observed them and took detailed notes on frustrations, confusions, suggestions, and other feedback. Participants also filled out post-task questionnaires rating the ease of use and desirability of different features.

The insights from wireframe testing helped surface UX issues early and guided our UI/UX design and development efforts. Key feedback involved structural changes to workflows, simplifying language, and improvements to navigation and information architecture. All issues and suggestions were tracked in a feedback tracker to ensure they were addressed before subsequent rounds of testing.
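
To make this concrete, below is a minimal sketch of how tracked feedback items could be structured so they can be filtered and prioritized between rounds of testing. The fields, statuses, and helper function are hypothetical illustrations, not our actual tooling:

```python
# Hypothetical sketch of a feedback-tracker record, not our real tool.
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OPEN = "open"
    IN_PROGRESS = "in progress"
    RESOLVED = "resolved"

@dataclass
class FeedbackItem:
    participant_id: str   # anonymized tester identifier
    stage: str            # e.g. "paper prototype", "wireframe", "beta"
    description: str      # verbatim observation or suggestion
    severity: int         # 1 (cosmetic) to 4 (blocker)
    status: Status = Status.OPEN

def open_blockers(items: list[FeedbackItem]) -> list[FeedbackItem]:
    """Return unresolved items, highest severity first."""
    pending = [i for i in items if i.status is not Status.RESOLVED]
    return sorted(pending, key=lambda i: i.severity, reverse=True)
```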

Once we had an initial functional version, beta testing began. We invited 50 external users who had pre-registered interest to access an unlisted beta site and provide feedback over 6 weeks. During this period, we conducted weekly video calls where 2-4 beta testers demonstrated their use of the product and shared candid thoughts. We took detailed notes during these sessions to capture specific observations, pain points, issues, and suggestions for improvement. Beta testers were also given feedback surveys after 1 week and 6 weeks of use to collect quantitative ratings and qualitative comments on different aspects of the experience over time.

Through use of the functional beta product and discussions with these dedicated testers, we gained valuable insights into real-world usage that high-fidelity prototypes could not provide. Feedback centered around performance optimizations, usability improvements, desired additional features and overall satisfaction. All beta tester input was triaged and prioritized to implement critical fixes and enhancements before public launch.

Once the beta period concluded and the prioritized changes were implemented, one final round of internal user testing was done. Ten non-technical users explored the updated product and flows without guidance and provided open feedback. This final check ensured the experience was coherent enough for new users to understand intuitively, without support.

With user testing integrated throughout our development process, from paper prototyping to beta testing, we were able to build a product rooted in addressing real user needs uncovered through research. The feedback shaped important design decisions and informed key enhancements at each stage. Launching with feedback from over 200 participants helped ensure a cohesive experience that was intuitive, useful and enjoyable for end users. The iterative process of obtaining input and using it to continually improve helped make user-centered design fundamental to our development methodology.

HOW WILL THE QUALITATIVE FEEDBACK FROM SURVEYS, FOCUS GROUPS, AND INTERVIEWS BE ANALYZED USING NVIVO

NVivo is a qualitative data analysis software package developed by QSR International to help users organize, analyze, and find insights in unstructured qualitative data such as interviews, focus groups, surveys, articles, social media, and web content. Some of the key ways it can help analyze feedback from different qualitative sources are:

Organizing the data: The first step in analyzing qualitative feedback is organizing the different data sources in NVivo. Survey responses can be imported from tools like SurveyMonkey or Google Forms, and interview/focus group transcriptions, notes, and audio recordings can be imported as well. This collates all the feedback in one place so coding and analysis can begin.

Attribute coding: Attributes such as participant demographics (age, gender, etc.), location, and question number can be recorded against each respondent. This makes it possible to subgroup and compare feedback by attribute when analyzing themes.
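
To illustrate the underlying idea outside NVivo itself, the same attribute-based subgrouping can be sketched with pandas. The column names and data below are hypothetical:

```python
# Illustrative only: attribute-based subgrouping expressed with pandas
# rather than NVivo's attribute classifications.
import pandas as pd

responses = pd.DataFrame({
    "respondent": ["R1", "R2", "R3", "R4"],
    "age_group":  ["18-24", "25-34", "18-24", "25-34"],
    "theme":      ["navigation", "navigation", "pricing", "pricing"],
})

# How often each theme is raised within each demographic subgroup.
counts = responses.groupby(["age_group", "theme"]).size().unstack(fill_value=0)
print(counts)
```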

Open coding: Open or emergent coding involves reading through the data and assigning descriptive codes or labels to text segments to capture meaning and patterns. This helps identify preliminary themes and topics directly from the words and phrases participants used.

Coding queries: As more data is open coded, queries can be run to find all responses related to particular themes, keywords, or codes. This makes it easy to quickly collate feedback linked to specific topics without manually scrolling through everything, which makes queries one of the most useful tools for analysis. A conceptual sketch of what such a query does appears below.
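
Conceptually, a coding query is just a filter over coded segments. A toy sketch of that idea in plain Python, with hypothetical segments and code names:

```python
# Illustrative sketch of what a coding query does conceptually.
segments = [
    {"source": "interview_01",
     "text": "I can never find the export button.",
     "codes": {"navigation", "frustration"}},
    {"source": "focus_group_02",
     "text": "The pricing tiers were confusing.",
     "codes": {"pricing"}},
]

def query(segments, *codes):
    """Return segments tagged with all of the given codes."""
    wanted = set(codes)
    return [s for s in segments if wanted <= s["codes"]]

for hit in query(segments, "navigation"):
    print(hit["source"], "->", hit["text"])
```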

Axial coding: This involves grouping open codes together to form higher-level categories and hierarchies. Similar codes referring to the same or linked topics are grouped under overarching themes, which brings structure to the analysis by organizing related topics at different levels of abstraction.
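
The result of axial coding can be pictured as a simple mapping from higher-level themes to the open codes grouped beneath them. A toy illustration with hypothetical theme and code names:

```python
# Illustrative: an axial-coding hierarchy as theme -> open codes.
code_hierarchy = {
    "usability": ["navigation", "onboarding", "terminology"],
    "value":     ["pricing", "feature requests"],
    "trust":     ["data privacy", "reliability"],
}

def theme_of(code: str) -> str | None:
    """Look up which overarching theme an open code belongs to."""
    for theme, codes in code_hierarchy.items():
        if code in codes:
            return theme
    return None

print(theme_of("navigation"))  # -> "usability"
```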

Case coding: Specific cases or respondents that provide a particularly insightful perspective can be marked or coded for closer examination. Case nodes help flag meaningful exemplars in the data for deeper contextual understanding during analysis.

Concept mapping: NVivo allows developing visual concept maps that help see interconnections between emergent themes, sub-themes and categories in a graphical non-linear format. These provide a “big picture” conceptual view of relationships between different aspects under examination.

Coding comparison: Coding comparison helps evaluate the consistency of coding between different researchers or coders by measuring their level of agreement, typically as percentage agreement or a kappa coefficient. This supports reliability and rigor when qualitative data is analyzed by multiple people.
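
As a sketch of the statistic behind such a comparison, Cohen's kappa can be computed with scikit-learn over two coders' tagging decisions. This is illustrative only, computed outside NVivo, and the data is hypothetical:

```python
# Illustrative: inter-coder agreement via Cohen's kappa, using
# scikit-learn rather than NVivo's built-in coding comparison.
from sklearn.metrics import cohen_kappa_score

# Whether each of ten text segments was tagged with a given code
# ("yes"/"no") by two independent coders.
coder_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
coder_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, ~0 = chance
```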

Coded query reports: Detailed reports can be generated from the different types of queries run. These reports allow closer examination of themes, cross-tabulation between codes and attributes, comparison between cases and sources, and more, facilitating analysis of the data from different angles.

Modeling and longitudinal analysis: Relationships between codes and themes emerging over time can be modeled using NVivo. Feedback collected at multiple points can be evaluated longitudinally to understand evolution and changes in perspectives.

With NVivo, all sources containing qualitative feedback data (transcripts, notes, surveys, images, etc.) are stored, coded, and linked in an underlying query-able database structure. This lets users leverage the tools above, and many others, to thoroughly examine emergent patterns, make connections between concepts, and generate insights. The software supports methodically organizing unstructured text data, systematically coding text segments, visualizing relationships, and gleaning the deep understanding needed to inform evidence-based decisions. For any organization regularly collecting rich qualitative input from stakeholders, NVivo provides a powerful centralized platform for systematically analyzing such feedback.

NVivo is an invaluable tool for analysts and researchers to rigorously analyze and gain valuable intelligence from large volumes of qualitative data sources like surveys, interviews and focus groups. It facilitates a structured, transparent and query-able approach to coding emergent themes, comparing perspectives, relating concepts and ultimately extracting strategic implications and recommendations backed by evidence from verbatim customer/user voices. The software streamlines what would otherwise be an unwieldy manual process, improving efficiency and credibility of insights drawn.

HOW WILL THE FEEDBACK FROM CLINICAL EXPERTS AND PATIENTS BE COLLECTED AND ANALYZED

Collecting meaningful and useful feedback from clinical experts and patients is crucial for the development of new medical treatments and technologies. A robust feedback process allows researchers and developers to gain valuable insights that can help improve outcomes for patients. Some key aspects of how feedback could be collected and analyzed at various stages of the development process include:

During early research and development stages, focus groups and design thinking workshops with clinicians and patients can help inform what needs exist and how new solutions may help address unmet needs. Audio recordings of these sessions would be transcribed to capture all feedback and ideas. Transcripts would then be analyzed for themes, pain points, and common insights using qualitative data analysis software. This early feedback is formative and helps shape the direction of the project.

Once prototypes are developed, usability testing sessions with clinicians and patients would provide feedback on early user experiences. These sessions would be video recorded with participants’ consent to capture interactions with the prototypes. Recordings would then be reviewed and analyzed to identify any usability issues, things participants struggled with, aspects they found intuitive, and overall impressions. Researchers may use qualitative coding techniques to systematically analyze the recordings for recurring themes. Feedback from these sessions helps make refinements and improvements to prototypes before larger pilot studies.

When pilot studies involve real-world use of new technologies or treatments, multiple methods are useful for collecting comprehensive feedback. Clinicians and patients in pilot studies could be asked to complete online questionnaires about their experiences at various time points, such as initial use, one-week follow-up, one-month follow-up, and study completion. Questions would address impact on clinical workflows, ease of use, patient experience and outcomes, and overall impressions. Questionnaires would be designed using best practices for question wording and response scales to produce high-quality quantitative data.

In addition to questionnaires, pilot study participants could optionally participate in 30-60 minute interviews or focus groups. A semi-structured interview guide would be used consistently across all interviews and groups to allow for systematic comparative analysis while still permitting open discussion of experiences. Interviews and groups would be audio recorded with consent for transcription and analysis. Recordings may be transcribed using speech recognition software and transcriptions would then be coded and analyzed thematically. Quantitative questionnaire data and qualitative interview/group data combined provide a comprehensive picture of real-world experiences.

To analyze feedback at scale from large pilot studies or post-market surveillance, Natural Language Processing (NLP) techniques may be applied to unstructured text data such as questionnaire comments, transcripts, clinical notes, and written reviews from patients and clinicians. NLP uses machine learning algorithms to extract semantic meaning from large volumes of free-form text. It allows for sentiment analysis to understand whether feedback is positive or negative, as well as topic modeling to surface common themes or concerns that emerge from the data. Combined with techniques like statistical analysis of Likert-scale responses, this approach analyzes both qualitative and quantitative feedback at a scale and level of rigor not possible through manual coding alone.
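
As a minimal sketch of the topic-modeling half of such a pipeline, the snippet below runs scikit-learn's LDA over a handful of hypothetical free-text comments. A production pipeline would add preprocessing, a far larger corpus, and a separate sentiment model:

```python
# Minimal topic-modeling sketch with scikit-learn; comments are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "The new interface saved time during patient intake.",
    "Logging results was slow and interrupted my workflow.",
    "Patients found the reminders helpful and easy to follow.",
    "Setup took too long and the manual was confusing.",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words characterizing each discovered topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```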

All analyzed feedback would be systematically tracked in a searchable database along with key details about when and from whom the feedback was received. Clinicians, researchers, and product developers would have access to review feedback themes, sentiments, and identified issues and enhancements. Regular reports on gathered feedback would also help inform strategic product roadmaps and planning for future research studies. The database gives feedback a visible impact on the continuous improvement of solutions over time, based on real-world input from intended end users.
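
One possible shape for such a database, sketched with Python's built-in sqlite3 module; the table and field names are hypothetical:

```python
# Illustrative schema for a searchable feedback database (sqlite3).
import sqlite3

conn = sqlite3.connect("feedback.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS feedback (
        id          INTEGER PRIMARY KEY,
        received_on TEXT NOT NULL,   -- ISO date the feedback was received
        source      TEXT NOT NULL,   -- 'clinician' or 'patient'
        stage       TEXT NOT NULL,   -- e.g. 'pilot', 'post-market'
        theme       TEXT,            -- coded theme from analysis
        sentiment   TEXT,            -- 'positive' / 'negative' / 'mixed'
        comment     TEXT NOT NULL    -- verbatim feedback text
    )
""")
conn.execute(
    "INSERT INTO feedback (received_on, source, stage, theme, sentiment, comment) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("2024-05-01", "clinician", "pilot", "workflow", "negative",
     "Logging results interrupts patient consultations."),
)
conn.commit()

# Example query: all negative clinician feedback, newest first.
rows = conn.execute(
    "SELECT received_on, comment FROM feedback "
    "WHERE source = 'clinician' AND sentiment = 'negative' "
    "ORDER BY received_on DESC"
).fetchall()
print(rows)
```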

Collecting feedback from multiple qualitative and quantitative sources at various stages of development, coupled with robust analytic techniques, helps uncover valuable insights that can strengthen new medical solutions to better serve clinicians and improve patient outcomes. A systematic, multifaceted approach to feedback collection and analysis ensures a continuous learning process throughout the lifecycle of developing technologies and treatments.