Sea Ice Modeling Workshop | Agenda

NCAR Center Green Campus, Boulder, Colorado

DAY 1 / TUESDAY, FEBRUARY 2, 2016

SEA STATE MODELING/FORECASTING WORKSHOP

0800 - 0830     Workshop Check-in / Coffee

0830 - 0840     Workshop Motivation (5-10 min w/QA)

  • Workshop motivation, goals, anticipated outcomes - Scott Harper (ONR)

0840 - 0910     Sea State Cruise Overview (30 min: 10 min + 5 min QA, ea)

           
0910 - 1010     Sea State Modeling & Forecasting (60 min: 10 min + 10 min QA, ea)

Each speaker to describe the forecast model, system components, coupling, initialization fields, boundary conditions, data assimilation, bias corrections, validation strategy & examples, process study foci & analysis plans, hindcasts, limits of predictability, & testing & model adjustments planned for the next freeze-up season

1010 - 1025     Break
           
1025 - 1055     Perspectives on Assessing Skill & Metrics (30 min: 5-10 min w/QA, ea)

1055 - 1100     Setting up for Break-Outs

1100 - 1200     Break-Outs - Part 1 (60 min)
Four rotating groups to successively build out input for each topic listed below. Each group rotates every 30 mins (2 BOG rotations before lunch & 2 after lunch). Assigned moderator & rapporteur stay with each topic to capture consistent notes.

BOG 1: Observations for Validating/Evaluating/Improving Model Performance
[Lead – Jim Thomson; Note-taker – Janet Intrieri] Room 2503

  • Validating the model fields using Sea State observations
    • List the observation datasets & fields to validate against
    • Is there value in developing any master obs files (with uncertainties)?
    • Which obs are in the GTS? Are the errors in the observations documented?
    • Should we run a parallel validation exercise, using the same initial conditions, to compare large scale dynamics & state parameters? Do we then progress down to the differences in the boundary layer to address model differences & process issues?
    • How can we best assess boundary condition differences with obs?
    • Is there value in focusing on a case study to assess effect of “incorporating” waves & their impacts?
    • Should we determine which initialization fields are optimal & run a coordinated model intercomparison to quantify?
  • Evaluating the models using observations from Sea State & beyond
    • What comparisons should we make between the observations & models to evaluate performance?
    • Is a single season of model validation meaningful? If not, how should we extend the record, & how long should the hindcasts be run?
    • What additional obs should we use?
    • What reanalyses are “best” for our purposes?
  • Improving model performance using Sea State observations
    • What parameterizations can we validate against observations?
    • What initial observations can we use to inform model changes, & then rerun against to test those modifications?
    • What obs would fill the largest gaps in our understanding of processes & aid in model representation?
    • What 2016 field campaigns are planned, & can we obtain any needed observations from them?
    • Is there a YOPP or PPP aspect we should be positioning for in 2017-2018?

BOG 2: Understanding Key Processes
[Lead – Amy Solomon; Note-taker – Pam Posey] Room 2603

  • What have we learned from observations taken during Sea State that inform process understanding during freeze-up?
    • What processes are most important at various timescales?
    • What role do ocean waves play in ice break-up?
    • What coupled processes were observed at the ice edge?
    • How does ocean stratification impact freeze-up?
  • Challenges to verifying key processes in forecasts with in-situ data
    • Using point measurements to verify coarsely-resolved models
  • What have we learned from model forecasts of the Sea State period that inform process understanding during freeze-up?
    • What processes must be explicitly represented, versus which can be parameterized fairly accurately?
    • What processes influence predictability & predictive skill at various timescales?
    • What are the sources of large model errors? Which of these errors need to be addressed first to improve forecasts?
  • What suite of process-oriented diagnostics can best constrain simulated ocean-ice-atmosphere-cloud feedbacks in the marginal ice zone?
  • Are there specific case studies suitable for a model intercomparison?
    • What is the best strategy for separating errors due to model physics versus initialization & boundary conditions?

BOG 3: Intercomparisons/Metrics
[Lead – Rick Allard; Note-taker – Chris Cox] Room 2607

  • What are our goals for intercomparison or coupled-process improvement?
  • How can we best evaluate skill (ice concentration, drift, ice edge, fluxes)?
  • What metrics are standard & which are best to evaluate the models?
  • Would it be useful to run models with a hierarchy of complexity (for example, fixed ocean/mixed-layer ocean/dynamic ocean) to evaluate sources of skill?

BOG 4: Model Improvement Plans
[Lead – Annarita Mariotti; Note-taker – Mimi Hughes] Room 1214 (Main Room)

  • What model improvements are being planned?
  • Are ensemble runs being considered? How would the metrics be addressed?
  • Should there be an intercomparison exercise after adjusted models are completed?
  • What model products are most useful?
  • Are there new forecast products that should be developed?
  • Discuss next field season opportunities for observations & forecast validation
  • Should we propose an Arctic testbed exercise for forecast comparisons, observations, validation?

1200 - 1245     Working Lunch Discussion (45 min / provided, on-site)

1245 - 1345     Break-Outs - Part 2 (60 min)

1345 - 1405     Break to gather thoughts for report-outs (20 min)

1405 - 1535     Break-Out Reports (90 min: 10 min + 10 min QA, each BOG)

  • Report-outs from the 4 BOG Leads
  • Develop comprehensive list of input from all groups on model intercomparison strategies & skill metrics; identified process studies; analysis priorities; improvement & testing runs; new development; etc.

           
1535 - 1545     Break

1545 - 1645     Outline Next Steps for Forecast Comparisons, Process Understanding, & Model Improvement Plans (60 min)

  • Prioritize tasks from compiled BOG list
  • Outline next action steps, POCs, timeline, deliverables, etc.
  • Discuss next field season opportunities & possible Arctic testbed exercise
  • Discuss missing pieces, gaps, coordination & activities that need funding
  • Assign presenter for next day’s summary presentation to NGGPS
  • Determine workshop community output piece & ongoing communication plan

1645                Workshop Adjourns
1800                Group dinner (Location TBD)

DAY 2 / WEDNESDAY, FEBRUARY 3, 2016

0830 - 0900     Workshop Welcome - Janet Intrieri (NOAA ESRL)

0900 - 1000     Perspectives on Community Sea Ice Model Needs & Criteria
Rick Allard (NRL)

  • 0935 - 0950 An overview of envisioned prediction products from the NGGPS in view of their applications (e.g., extent, thickness, lead time, uncertainty quantification, resolution, etc.)
  • 0950 - 1000 Perspectives on verification/criteria for NGGPS sea ice model selection

1000 - 1020     Break

1020 - 1230     Candidate Model Round-Up
Cecilia Bitz (UW)

  • Current Wx-scale to seasonal-scale sea ice prediction systems (10 min)
    • Prediction intercomparison example - Ed Blanchard-Wrigglesworth (UW)
  • Presentations of candidate models (10 min each)

[Presentations should include overview information; readiness/maturity; feasibility of a community model configuration & NEMS compatibility; code management philosophy; processes represented; initialization; boundary conditions; outputs; applicability limits; future envisioned development path/support; computational costs; code/documentation availability; skill metrics/criteria/experiments/data used for evaluation; behavior as part of coupled models; etc.]

  • Discussion of common model threads, capabilities, products, feasibility, etc. based on the presentations

1230 - 1330     Working Lunch Discussion (60 min / provided, on-site)

1330 - 1530     Model Selection Criteria/Skill/Testing - Break-Outs
                        Janet Intrieri (NOAA ESRL)
Four rotating groups to successively build out input for each topic below. Each group rotates every 30 mins. Assigned moderator stays with each topic to capture consistent notes.

BOG 1 Room 1214: Develop criteria for the sea ice model selection that consider unified model applications and community modeling support
Lead: Bob Grumbine; Note-taker: Becki Heim

BOG 2 Room 2503: Determine skill metrics for testing candidate models
Lead: Avichal Mehra, NOAA/NWS/EMC; Note-taker: Mitch Bushuk

BOG 3 Room 2603: Provide input on model testing methodology & goals, mechanism for reviewing results, and delivery of recommendation
Lead: Rick Allard, NRL; Note-taker: Frederick DuPont

BOG 4 Room 2607: Model development path/community engagement
Lead: Marika Holland; Note-taker: Adrian Turner

1530 - 1600     Break to gather thoughts for report-outs

1600 - 1645     Break-Out Group Reports & Plenary Discussion (45 min: 10 min ea) - Annarita Mariotti (NOAA CPO)

  • Report-outs from the 4 BOG leads
  • Develop comprehensive list of input from all groups; summarize

1645                Workshop Adjourns

 

DAY 3 / THURSDAY, FEBRUARY 4, 2016 - NGGPS WORKSHOP

0830 - 0945     Other Key Considerations (75 min: 5-10 min w/QA, ea)
Marika Holland (NCAR)

0945 - 1010     Break

1010 - 1200     Outline Next Steps - Ligia Bernardet (NOAA ESRL)

  • Summarize NGGPS deliverables, timeline, etc.
  • Discuss coordination opportunities and needs
  • Develop specific comparisons/testing projects and participants
  • Capture gaps and desired evolution pathway over next few years to meet needs
  • Discuss/finalize workshop recommendations/output

1200                Workshop Adjourns