Select a domain. Machine Learning researchers should pick “ML papers”.
Why participate?
You get feedback. Fermi arithmetic problems cover geography, science, and general knowledge; Political Fact-Checking covers many topics in US politics. In both domains, you get feedback on your accuracy and calibration. (Calibration means that when you judge things "90% likely", they actually happen roughly 90% of the time.)
You help advance research on using machine learning to predict, from cheaper signals, the judgments people make after deliberation.
What kind of feedback will I get?
For Fermi arithmetic problems and Political Fact-Checking:
Are your predictions well-calibrated?
[Calibration plot: your predicted probabilities against observed frequencies, with the diagonal line marking perfect calibration.]
Do your accuracy and calibration improve with practice?
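To make the calibration feedback concrete, here is a minimal sketch, assuming Python and NumPy (the names calibration_table, preds, and truth are illustrative, not the project's actual scoring code): it bins your stated probabilities and compares each bin's average prediction with the observed frequency of correct outcomes.

```python
# A minimal sketch of calibration feedback, assuming Python + NumPy.
# Names here are illustrative, not the project's actual scoring code.
import numpy as np

def calibration_table(predicted_probs, outcomes, n_bins=10):
    """For each probability bin, return (mean prediction, observed frequency, count)."""
    predicted_probs = np.asarray(predicted_probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)  # 1 = statement was true, 0 = false
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (predicted_probs >= lo) & (predicted_probs < hi)
        if in_bin.any():
            rows.append((predicted_probs[in_bin].mean(),
                         outcomes[in_bin].mean(),
                         int(in_bin.sum())))
    return rows

# Toy example: judgments you rated "90% likely" should come out true ~90% of the time.
preds = [0.9, 0.9, 0.9, 0.9, 0.9, 0.6, 0.6, 0.3, 0.3, 0.3]
truth = [1,   1,   1,   1,   0,   1,   0,   0,   0,   1]
for mean_pred, obs_freq, n in calibration_table(preds, truth, n_bins=5):
    print(f"predicted ~{mean_pred:.2f}, observed {obs_freq:.2f} (n={n})")
```

If your judgments are well-calibrated, the predicted and observed columns roughly match; the calibration plot above is this table drawn as a curve against the diagonal.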
For ML papers:
Based on your judgments, we’ll send you recommendations for papers you might like.
Who's behind this?
We are a team of researchers from Oxford University, Stanford University, and Ought. We are researching hybrid AI systems that combine machine learning with human reasoning. Our goal is to help build systems that predict human preferences after deliberation. Read more here.
Contact: Owain Evans, Research Scientist, University of Oxford (owaine@gmail.com)