UNC Extended Range Forecasting Contest
What do we forecast?
Enter 6-day forecasts of daily low and high temperature and probability of precipitation (PoP) at Denver, CO and Albany, NY. Your goal is to beat "PerCy", a no-skill blended forecast consisting of persistence on Day 1 and climatology on Day 6. You can view each city's climatology and generate an estimate of PerCy's forecast using the tool in the left navigation panel.
Submit forecasts any day of the week, including weekends. Forecasts are due at 00:00 UTC and are verified each day, beginning at local midnight, using the NWS daily climate report for each city.
You may submit up to two forecasts per day, since forecasts for the two cities are considered independent. A previously submitted forecast for a given day may be revised up until that day's cutoff time.
To maintain contest-eligible standing, you must submit at least 36 forecasts per semester, split evenly between the two cities, at a pace of about three forecasts per week for 12 weeks. Starting early gives you a cushion for maintaining this pace. I like to submit forecasts for both cities on one day each week, and again from home about every other weekend.
How are forecasts scored?
Forecast scores for each variable and lead time are computed as the difference between the mean square error (MSE) of a participating forecaster and that of the corresponding set of PerCy forecasts:

Score = (1/N) Σ (Fi − Oi)² − (1/N) Σ (Pi − Oi)²

where Fi is your forecast, Pi the matching PerCy forecast, and Oi the observed value on day i.
Forecasters who consistently beat PerCy will have a negative score. This score is sensitive to distance: a forecaster who is consistently pretty good, but never perfect, will likely earn a better score than one who makes both very good and very poor forecasts. Note that each set of N forecasts (Fi) is compared against its own set of N PerCy forecasts (Pi), corresponding only to the days on which those forecasts were submitted. This helps adjust for the difference between "hard" days and "easy" days.
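The scoring rule above can be sketched in a few lines of Python. This is an illustrative sketch, not the contest's actual code; the function name and sample numbers are invented for the example.

```python
def contest_score(forecasts, percy, observed):
    """Return MSE(forecaster) - MSE(PerCy) over the same N days.

    A negative result means the forecaster beat PerCy.
    Only days the forecaster actually submitted are included,
    so both lists are matched to the same verification days.
    """
    assert len(forecasts) == len(percy) == len(observed)
    n = len(forecasts)
    mse_f = sum((f - o) ** 2 for f, o in zip(forecasts, observed)) / n
    mse_p = sum((p - o) ** 2 for p, o in zip(percy, observed)) / n
    return mse_f - mse_p

# Example with made-up high temperatures (deg F) for three days:
# the forecaster is consistently close, PerCy is a bit further off,
# so the score comes out negative.
score = contest_score([41, 55, 30], [43, 58, 28], [40, 56, 31])
```

Because squared errors grow quickly with distance, one badly busted forecast can outweigh several near-perfect ones, which is why steady accuracy beats occasional brilliance here.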
Note that you enter the probability of precipitation rather than the amount. Since measurable precipitation is verified as "yes" or "no", you can be perfect only with a 0% or 100% forecast. When there is uncertainty, however, it pays to hedge your bets by choosing a percentage in the middle. Trace amounts of precipitation (not measurable) are verified here as "no" precipitation.
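A quick sketch (assumed, not contest code) shows why hedging pays under squared probability error. If the true chance of rain is 40%, the expected penalty is minimized by forecasting 0.4, not by going all-in on 0% or 100%:

```python
def expected_brier(forecast_prob, true_prob):
    """Expected (forecast - outcome)^2 when rain occurs with true_prob.

    Outcome is 1 if measurable precipitation occurs, else 0, so the
    expectation averages the two possible squared errors.
    """
    return (true_prob * (forecast_prob - 1) ** 2
            + (1 - true_prob) * forecast_prob ** 2)

p_true = 0.4
# Forecasting 0.0 gives expected penalty 0.40, forecasting 1.0 gives 0.60,
# while matching the true probability (0.4) gives only 0.24.
penalties = {f: expected_brier(f, p_true) for f in (0.0, 0.4, 1.0)}
```

In general, the expected squared error is minimized when your stated probability equals your honest estimate of the chance of rain, so there is no incentive to exaggerate toward 0% or 100%.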
Students of forecast verification will note that the MSE for probability of precipitation is formally called the Brier Score. Students should also note that there is no simple way to score a multi-variable forecast. Here we reduce the score to a simple scalar value but proper forecast verification should be conducted using a distributions-oriented approach (e.g., Brooks and Doswell, 1996: WAF 11-3. "A Comparison of Measures-Oriented and Distributions-Oriented Approaches to Forecast Verification".)
Your official contest standing is your average ranking across all forecast variables. This is easiest to understand while you are viewing the results table, reachable via the "Current Rankings" link in the navigation bar.
Each forecaster is allowed one request to have a forecast stricken from the record after it is verified. In other words, you get one chance to call a "Mulligan"! Send e-mail to Dr. Nutter with the subject "Mulligan" to make this request.
Good luck, and have fun forecasting!
This contest would likely still be just an idea if Doug Koch (Class of 2010) hadn't written a functional first draft of the code. Thanks, Doug!
Show and Update Forecast Verification Data - Any forecaster can follow this link to view the verification records, but the linked page is intended for the contest administrator to enter the verification data.