
Agenda

  1. Connection, Agenda/Objectives (5 min);

  2. Current, Outstanding, Backlog (20 min);

    1. Timing for bi-weekly meetings;

    2. Feedback on Clinical Shareable Pathways presentation;

    3. Splinter Group, Field Guide Updates;

    4. Lessons Learned as we proceed, consumers, challenges, etc.

    5. Prioritization: i. Most important; ii. Low hanging fruit; iii. Conditional (if time, what?);

  3. Clinical Shareable Pathways

    1. Feedback: immediate feedback via survey, focus group, tracking number of views

    2. Process for future feedback

    3. Other, future topics?

  4. Patterns and examples (COVID ed, hypertension)

    1. Feedback system

    2. Rating system

    3. What do we need to set it up?

  5. Pattern Activity Update; Field Guide Update

  6. Next OMG Meeting, what to prep? Upcoming milestones?

  7. New/Upcoming Topics (5 min);

    1. Identify;

  8. Actions and Next Steps (5 min);

    1. Review;

Minutes

2. Timing/day? PROPOSED: move to 1:00-2:30 Eastern every other Monday (Cary to work with Tracie and Linda)

3. Clinical Shareable Pathways feedback: opportunity to capture feedback earlier, immediately after the session. Gallery view/review as a possible way to handle feedback (offline or online feedback on Pathways, 3-4 examples). Create a place on Confluence to store those 3-4 examples for collecting feedback. For a functioning model, how to capture that? Do both through Confluence and the Virtual Authoring Tool: set up in Confluence, connected to example authoring tools, to record, review, and capture comments and feedback. Tracie/Cary to work with Shawn on how best to proceed. Also consider how to handle the feedback (deeper issues, comments, etc.) and identify which items create more action or discussion.

4. Understand the data side; present the data models (Steve White, SDMN). Need to agree on patterns and approaches and to develop the problem statement; invite Steve White to present on the data models. ***Please note: any requested presentations or proposed topics should be identified during or between meetings.

5. Patterns and examples (COVID ed, hypertension); all reviewable.

6. Factual Characteristic; scope/nature/intent: data on the models; a descriptive model of how an organization and the people in it behave today. This is different from full computation vs. partial computation; people have misunderstood examples in the past, and the risk is mismatching model grades (more info below).

7. Pattern Activity Update; Field Guide Update

Field Guide on pause: it is large and may be altered. Rob, Steve, and John worked on roles, etc., and passed the result to the Chair group; standing by for next actions. Pattern Activity: existing patterns from the appendix; the data dictionary as an essential part of handling data, different representations, and situational data models.

Pattern work extracted from the Field Guide is to become a live set of materials spanning Confluence and the models (BPM+), tooling, simulations, etc.; this solves the Field Guide growth issue. Second: two levels of patterns: macro patterns (complex usages, currently in the Field Guide) that break down into smaller BPM+ Health patterns; the next tier would be generic BPM+ patterns not specific to healthcare. An initial list of BPM+ patterns has been created. Data dictionary and scorecard: the data dictionary is an SDMN-inspired methodology.

Publish, review, rate: need to set up Confluence to enable this process step; could be reviewed in a dedicated Thursday meeting or at this group's next meeting.

Those patterns are works in progress, not yet published; official feedback is forthcoming. Preliminary Confluence page, rationale, etc.; consolidate the patterns and publish, then connect with this group for feedback. John/Davide to work on this and put it out for comment to this group.

8. Next OMG meeting Dec 7 (next sessions March and September 2021); Dec 7 may include a Jump Start session, so the next full session will be in March.

Rating system with the content? Priority status and scope (building off the last session):

  1. May have brilliant models that are inaccessible to the average user; standardized and modular content addresses this

    1. Rec: star rating system with 5-7 dimensions: quality measures, readability, ease of understanding, ability to implement, conformity to patterns, etc.

    2. Rec: include maturity in the same rating system; impact and ability to implement (may appear in both); keep a small number of dimensions that can be reasonably defined

    3. Scope is very important: want to avoid “need more freedom than models allow” (when actually applied to models designed for complete simulation)

    4. Offline: identify the 5 different dimensions; what are the selling points, and what customer needs must be properly addressed?

    5. Potential issue: multiple versions of the same model; how do we take a “consumer report”-type approach (this is the best approach, etc.)?

      1. Rating system open to the community; owners may decide to open an issue; once the issue is resolved, a new version is created. Should there be a way to notify people who have commented? (Don’t want to mix comments across different versions.)

      2. Are comments named/attributable, with follow-up? Subscriptions to models and to versions of models via the Confluence setup (Shawn question); figure out the best way to set this up in Confluence. Two levels: casual user feedback vs. the more involved “open an issue” (which may result in no action)

  2. System or visualization to rate what type of model or pathway, to help users

  3. Two-axis visualization: x = accessibility vs. y = scope

  4. How to funnel feedback into modifications; how to say whether a model is mature or accessible

  5. Question: how many models will be identified? This may affect the need for a rating system
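The rating-system discussion above (star ratings across a small set of dimensions, with feedback kept separate per model version) could be sketched as follows. This is a minimal illustration only: the class names, field names, and the exact dimension list are hypothetical, not an agreed design; the dimensions shown are the ones mentioned in the meeting.

```python
from dataclasses import dataclass, field
from statistics import mean

# Candidate dimensions named in the meeting; a real system would finalize 5-7.
DIMENSIONS = (
    "quality",
    "readability",
    "ease_of_understanding",
    "ability_to_implement",
    "conformity_to_patterns",
)

@dataclass
class ModelRating:
    """One user's star ratings (1-5) across the proposed dimensions."""
    scores: dict  # dimension name -> stars (1-5)

    def __post_init__(self):
        for dim, stars in self.scores.items():
            if dim not in DIMENSIONS:
                raise ValueError(f"unknown dimension: {dim}")
            if not 1 <= stars <= 5:
                raise ValueError(f"stars out of range for {dim}: {stars}")

@dataclass
class ModelVersion:
    """Ratings and comments are stored per version, so feedback is never
    mixed across versions (a concern raised in the meeting)."""
    version: str
    ratings: list = field(default_factory=list)
    comments: list = field(default_factory=list)

    def average(self, dimension: str) -> float:
        """Average stars for one dimension, 0.0 if no one has rated it yet."""
        values = [r.scores[dimension] for r in self.ratings if dimension in r.scores]
        return mean(values) if values else 0.0

# Example: two community ratings on version 1.0 of a pathway model.
v1 = ModelVersion("1.0")
v1.ratings.append(ModelRating({"readability": 4, "quality": 5}))
v1.ratings.append(ModelRating({"readability": 2}))
print(v1.average("readability"))  # 3.0
```

Keeping ratings and comments on a `ModelVersion` rather than on the model itself directly addresses the "notify commenters on a new version" question: a new version starts with empty lists, and the old version's commenters can be looked up and notified.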