To help get our arms around the topic and allow contributors to focus on their areas of interest and expertise, the uncertainty initiative is organized into four key challenge areas:
This challenge area will help answer the question, “Where do we begin?” Most transportation planners and modelers won’t take much convincing that there is uncertainty in every aspect of a forecast. However, depending on what is to be evaluated and how, some uncertainties are more relevant than others.
Planners and modelers need guidance on how to home in on the relevant uncertainties based on the projects and policies under evaluation and how they will be measured. At the same time, we need to guard against an insufficient exploration of assumptions. Modelers will need strategies for pushing their imaginations in defining scenarios, including cases where policies and projects fail, to discover and explore ‘worst-case’ conditions. This could even include exploring ‘black swan’ scenarios to move thinking beyond the constraints of existing tools.
Uncertainties that can be modeled will be translated into ranges for analysis. At a minimum, these ranges are a set of bounds or cases to be tested. If the uncertainties are well-characterized, as opposed to deep, a probability distribution could also be defined to support risk assessment outputs (e.g. confidence intervals, probable vs. possible outcomes).
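As a minimal sketch of the distinction between bounds and a well-characterized distribution, the example below samples one uncertain input both ways and summarizes an outcome with a confidence interval. The variable names, values, and the stand-in outcome function are all hypothetical, not any agency’s model or recommended elasticity.

```python
# Minimal sketch (hypothetical values): contrasting a simple bounds test with
# a sampled distribution for a single uncertain input.
import numpy as np

rng = np.random.default_rng(seed=0)

def toy_outcome_model(fuel_price):
    """Stand-in for a travel model: returns a notional regional VMT index."""
    return 100.0 * (fuel_price / 3.0) ** -0.2  # illustrative elasticity only

# 1) Deep / poorly characterized uncertainty: test the bounds as discrete cases.
bounds = {"low": 2.0, "reference": 3.0, "high": 6.0}   # $/gallon, hypothetical
for name, price in bounds.items():
    print(f"{name:>9}: VMT index = {toy_outcome_model(price):.1f}")

# 2) Well-characterized uncertainty: assign a distribution and sample it.
prices = rng.triangular(left=2.0, mode=3.0, right=6.0, size=5000)
outcomes = toy_outcome_model(prices)

# Summaries that support risk-style outputs (e.g. a 90% interval).
lo, hi = np.percentile(outcomes, [5, 95])
print(f"median = {np.median(outcomes):.1f}, 90% interval = [{lo:.1f}, {hi:.1f}]")
```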
This challenge area will also consider how the probabilities and ranges can and should be updated as new information becomes available. The new information could be a planned major investment from one agency (e.g. a new bridge/tunnel, transit line, or toll road) that would have cascading effects on land use and on the plans of other agencies. In effect, the practice could combine an exploratory approach with a Bayesian approach to setting and refining the uncertainty distributions.
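One simple way to picture that Bayesian refinement, assuming a normally distributed quantity with known observation noise, is a conjugate normal update. The numbers in the sketch below are hypothetical; the point is only that the posterior becomes the refreshed distribution fed back into the exploratory analysis.

```python
# Minimal sketch (hypothetical numbers): conjugate normal update of an
# uncertain annual VMT growth rate as new observations arrive.
import numpy as np

# Prior belief from the last plan update (percent per year).
prior_mean, prior_var = 1.0, 0.5 ** 2

# New information, e.g. observed growth after a major investment opened.
observations = np.array([0.4, 0.6, 0.5])      # percent per year, hypothetical
obs_var = 0.3 ** 2                             # assumed measurement noise

# Standard normal-normal update with known observation variance.
n = len(observations)
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + observations.sum() / obs_var)

print(f"prior:     mean={prior_mean:.2f}, sd={prior_var ** 0.5:.2f}")
print(f"posterior: mean={post_mean:.2f}, sd={post_var ** 0.5:.2f}")
# The posterior then replaces the prior as the uncertainty range/distribution
# carried into the next round of exploratory runs.
```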
Catalog of uncertainty variables and ranges tested by planning agencies
Uncertainties deemed relevant but not well handled by existing modeling tools
Guidance and references for defining uncertainty bounds and how these can be translated into model parameters
Description, guidance and examples of how updated information can be used to modify the uncertainty distributions
Guidance on how to evaluate the suitability of a given model for testing uncertainty (e.g. can I just vary capacities to represent automated vehicles?); a sketch of this kind of translation follows this list
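As one hedged illustration of translating an uncertainty into a model parameter, the sketch below maps an assumed automated-vehicle market share range onto a link capacity multiplier. The function, the share range, and the headway-based relationship are hypothetical stand-ins, not a recommended or empirically estimated curve.

```python
# Minimal sketch (hypothetical relationship): turning an assumed automated
# vehicle (AV) market share into a link capacity multiplier for a model run.
import numpy as np

def capacity_multiplier(av_share, headway_saving=0.3):
    """Illustrative only: capacity rises as AVs (assumed to follow at shorter
    headways) make up more of the fleet. Not an empirically estimated curve."""
    effective_headway = 1.0 - headway_saving * av_share
    return 1.0 / effective_headway

# Uncertainty range for future AV share, expressed as cases to be tested.
av_share_cases = np.linspace(0.0, 0.8, 5)   # 0% to 80%, hypothetical bounds

for share in av_share_cases:
    mult = capacity_multiplier(share)
    print(f"AV share {share:.0%}: apply capacity factor {mult:.2f} to freeway links")
    # In practice this factor would be written into the model's network or
    # parameter file before launching the corresponding experiment.
```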
The historic focus on improving accuracy in our models as predictive tools has had implications for their ability to operate in an exploratory manner. Long run times and manual processes to set up, run, and analyze results make even a handful of experiments daunting, let alone a relatively small exploratory sample of 50 to 100 experiments. This area will examine the practical obstacles to implementation (computational power, software compatibility, archive capacity, etc.).
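To make the scale of the problem concrete, the sketch below shows one generic way to design and dispatch a 100-run exploratory batch using a Latin hypercube design and a local process pool. The model function and parameter ranges are placeholders; a real application would replace them with code that configures, launches, and summarizes a full model run.

```python
# Minimal sketch: designing and dispatching an exploratory batch of ~100 runs.
# The model function and parameter ranges are placeholders, not a real setup.
from concurrent.futures import ProcessPoolExecutor

from scipy.stats import qmc

# Hypothetical uncertain inputs: fuel price ($/gal), telework share, AV share.
l_bounds = [2.0, 0.05, 0.0]
u_bounds = [6.0, 0.40, 0.8]

def run_experiment(params):
    """Placeholder for configuring and running one travel model experiment."""
    fuel_price, telework_share, av_share = params
    # ... write inputs, launch the model, read back summary measures ...
    vmt_index = 100.0 - 30.0 * telework_share + 10.0 * av_share - 2.0 * fuel_price
    return {"fuel_price": fuel_price, "telework": telework_share,
            "av_share": av_share, "vmt_index": vmt_index}

if __name__ == "__main__":
    sampler = qmc.LatinHypercube(d=3, seed=42)
    design = qmc.scale(sampler.random(n=100), l_bounds, u_bounds)

    # Each experiment is independent, so they can run in parallel workers.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_experiment, design))

    print(f"completed {len(results)} experiments")
```

TMIP-EMAT, described below, packages this kind of experimental design and dispatch for larger travel demand models.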
TMIP-EMAT was built to facilitate using large travel demand models across an uncertainty space, but requires some investment to integrate and use effectively. VisionEval is a strategic tool platform that can be set up and run with lower overhead than a traditional travel demand model. These tools are used in various ways (independently, in concert, or as part of a structured exploration and narrowing of scenarios), so a cohesive compilation and reference for new and existing users would be valuable.
Build and maintain a reference / user group of TMIP-EMAT users with information on the core model structure, uncertainty application, lessons learned, contact info
Survey and describe practices using VisionEval or other strategic tools in concert with more complex travel demand models
Catalog how different types of tools are structured in application (e.g. complementing short-term operational analysis with longer-term strategic analysis)
TMIP-EMAT
Documentation: https://tmip-emat.github.io/
TMIP repo: https://github.com/tmip-emat/tmip-emat
Active Fork: https://github.com/camsys/emat/tree/cloud
After going through all the effort of defining uncertainty and conducting many model runs, we need new approaches to support analysis and call attention to key outcomes.
It is important to counter the temptation to reduce the results to a single aggregate value, which masks decisions about how impacts are prioritized (priorities that may vary across stakeholder groups), imposes an assessment of the risk associated with each impact, and asserts relationships between impacts.
This is an area where equity concerns and considerations need to be emphasized. Groups within an equity population may fare differently across the range of uncertainties. There is useful information in examining the shape of the distribution of outcomes, identifying tipping points, and describing best- and worst-case scenarios.
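As a small illustration of summarizing many runs without collapsing them into one aggregate number, the sketch below tabulates the distribution of an outcome measure and flags the best- and worst-case runs. The results table, column names, and measures are simulated and hypothetical, standing in for archived model outputs.

```python
# Minimal sketch (simulated results): summarizing outcome distributions across
# experiments rather than collapsing them into one aggregate number.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_runs = 100

# Stand-in results table; in practice this would be read from archived runs.
results = pd.DataFrame({
    "run_id": np.arange(n_runs),
    "transit_access_low_income": rng.normal(55, 12, n_runs),  # hypothetical measure
    "transit_access_region": rng.normal(70, 8, n_runs),
})

# Shape of the distribution: percentiles tell a richer story than the mean.
summary = results[["transit_access_low_income", "transit_access_region"]].describe(
    percentiles=[0.05, 0.25, 0.5, 0.75, 0.95])
print(summary.round(1))

# Identify best- and worst-case runs for the equity-focused measure so the
# underlying input assumptions can be examined and narrated.
worst = results.loc[results["transit_access_low_income"].idxmin(), "run_id"]
best = results.loc[results["transit_access_low_income"].idxmax(), "run_id"]
print(f"worst case: run {worst}; best case: run {best}")
```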
Example visualizations and source code to present varying inputs and many outputs
Procedures and guidelines to highlight key results for modelers (e.g. best case, worst case, Pareto optimal)
Lempert, R.J. Measuring global climate risk. Nat. Clim. Chang. 11, 805–806 (2021). https://doi.org/10.1038/s41558-021-01165-9
A key question of this challenge area will be how to demonstrate the value and feasibility of this approach for planning agencies and regulators.
The FHWA Transportation Planning for Uncertain Times report highlights several challenges and pitfalls to implementing a DMDU approach within MPOs. This area will focus on the organizational challenges specific to quantitative analysis while coordinating with the relevant TRB committees focused on policy, regulation, and planning processes. The FHWA report makes a compelling case that a “Deliberation with Analysis” mechanism not only improves plans but also builds legitimacy in modeling practices. One way to build that legitimacy is to facilitate an interaction between planners and models in which the story told by the model emerges from iterative exploration and refinement across uncertainty dimensions and boundaries.
This area may also survey how planning agencies are considering other forecast horizons to manage uncertainty and which decisions and considerations work best in a Long Range Transportation Plan (LRTP) context.
Collect, synthesize, and summarize approaches and lessons learned for communicating uncertain forecasts to stakeholders accustomed to point predictions, perhaps drawing on examples from other application areas where information is already provided to the public with measures of uncertainty (e.g. weather forecasts, financial returns)
Highlight examples and case studies where modeling uncertainty brought together a more diverse set of stakeholders through the inclusion of a wider range of model assumptions.
Examples and case studies where the planning process included uncertainty considerations (early in discovery / needs assessment, or as a stress test of preferred alternatives)
Survey of approaches to mid-range forecasts to complement an LRTP