Unlock the Lab

Your Guide to Reading Science Like a Scientist

"Chocolate makes you smarter!" Headlines promise miracles and threaten disasters, but what does the actual science say?

In this interactive workshop, you'll become a science detective. You'll evaluate real-world research scenarios using the same criteria professional scientists use to judge quality.

🎯 Your Mission

Dual Challenge: For each study, you must:

  1. Predict the crowd – guess the average rating others will give it
  2. Judge the science – rate its quality using expert criteria (1-7)

Scoring: Points are awarded based on how closely your predictions match the crowd's averages. Can you think like a scientist AND understand how the public perceives research?

Why both? This dual approach allows you to express your own beliefs about research quality while also reflecting on how those beliefs might differ from others' perspectives.

📚 What You'll Learn:

  • ✅ Spot red flags in research (predatory journals, hidden data, clickbait)
  • ✅ Distinguish good science from marketing disguised as research
  • ✅ Understand why open access and transparency matter
  • ✅ Evaluate methodology, sample sizes, and conclusions critically

💡 First, you'll learn the tools. Then, you'll test your skills on real scenarios!

πŸ† Leaderboard & Anonymous Usernames

Rate at least 12 papers to appear on the public leaderboard, and to review and compare your ratings with the crowd at the end.

  • You'll be assigned a unique anonymous username (e.g., "Red Fox", "Wise Owl") automatically.
  • Your username will be displayed on the results page at the end.
  • Rankings are based on how accurately you predict the crowd's average ratings.
  • Earn badges (🏆🥇🥈🥉) based on your prediction accuracy score.

View the live leaderboard at any time on the public dashboard.

📊 Data Use Notice

By participating in this task, you agree that your anonymous data will be used for research purposes and may be shared and analysed.

  • Your data is identified only by an automatically assigned anonymous username.
  • Your username will be displayed on the results page at the end.
  • You need not share this username with anyone if you do not wish to.
  • No demographic details or personally identifiable information is collected.

All data is fully anonymised and used solely for understanding how people evaluate research quality.

Author: Dr Pablo Bernabeu, Department of Education, University of Oxford

Legal disclaimer: This app and workshop were created by Dr Pablo Bernabeu in a personal capacity, in his spare time. His employer is not affiliated with, does not endorse, and bears no liability for this app or workshop.

Materials: github.com/pablobernabeu/Unlock_the_Lab

Licence: CC BY 4.0 – Free to use with attribution

💡 Contributions welcome! Suggest improvements or report issues on GitHub.


📚 Before We Begin

📖 Take your time with these resources! They're the foundation for evaluating research quality. You can return to them anytime using the 📖 button.

📚 Scientific Glossary

Key terms you'll encounter when evaluating research:

⚖️ Quality Rating Rubric (1-7 Scale)

Use these six criteria to rate each study. Consider all aspects together for your final score:

⚖️ What Matters Most to You?

Instructions

You have 20 tokens to distribute across the six criteria you used to evaluate the studies. The more tokens you assign to a criterion, the more important it is to you.

How to use: Tap any box in a row to fill all boxes from the left up to that position. Tap the last filled box again to clear that criterion's tokens.

You must use all 20 tokens before you can proceed to your results.

Tokens used: 0 / 20
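The workshop text doesn't specify how these token allocations feed into the final analysis. A minimal sketch, assuming the simplest interpretation where each criterion's token count is normalised into a proportional weight, might look like this; the criterion names, example counts, and the `tokens_to_weights` helper are all hypothetical:

```python
# Hypothetical sketch of turning a 20-token allocation into criterion weights.
# Assumes weights are simply token counts divided by the total budget.
TOTAL_TOKENS = 20

def tokens_to_weights(tokens: dict[str, int]) -> dict[str, float]:
    """Normalise token counts into weights that sum to 1."""
    if sum(tokens.values()) != TOTAL_TOKENS:
        raise ValueError("All 20 tokens must be used before proceeding.")
    return {criterion: count / TOTAL_TOKENS for criterion, count in tokens.items()}

# Example allocation across the six criteria (counts are illustrative).
allocation = {
    "Open Access": 4, "Spotting Clickbait": 3, "Methodology": 6,
    "Source Credibility": 3, "Nuanced Conclusions": 2, "Crowd vs. Experts": 2,
}
weights = tokens_to_weights(allocation)  # Methodology -> 6/20 = 0.3
```

Under this interpretation, assigning 6 of your 20 tokens to a criterion means it contributes 30% of your overall weighting.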

🎉 Workshop Complete!

🎯 Part 1: Your Prediction Performance

How well did you predict what others would rate each study?

Your Prediction Accuracy Score

0
Calculating rank...

This score measures how accurately you predicted the crowd's average ratings. Higher scores mean better predictions!
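The app doesn't publish its exact scoring formula. As a rough sketch, assuming the score rewards a small mean absolute error between your predictions and the crowd's average ratings on the 1-7 scale, it could be computed like this; the `prediction_accuracy` helper and the 0-100 scaling are assumptions for illustration only:

```python
# Illustrative sketch of a prediction-accuracy score (not the app's actual formula).
# Smaller average gaps between your predictions and the crowd's means give higher scores.
def prediction_accuracy(predictions: list[float], crowd_means: list[float]) -> float:
    """Return a 0-100 score based on mean absolute error on the 1-7 scale."""
    errors = [abs(p, ) if False else abs(p - c) for p, c in zip(predictions, crowd_means)]
    mae = sum(errors) / len(errors)   # mean absolute error
    max_error = 6.0                   # largest possible gap between 1 and 7
    return round(100 * (1 - mae / max_error), 1)

score = prediction_accuracy([4, 6, 2], [5, 6, 3])  # off by 1, 0, and 1 points
```

Perfect predictions would score 100 under this sketch, and each point of average error costs roughly 17 points.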

πŸ† Top Predictors

See how your prediction accuracy compares to that of other participants (only those who completed at least 12 ratings are included).

Loading...

📊 Part 2: Your Rating Profile

How did your own quality ratings compare to the crowd's ratings?

⚖️ Part 3: Your Criterion Weightings

Here is how you weighted the six evaluation criteria.

Loading...

You've evaluated 0 research papers!

  • 🔓 Open Access – Paywalls hide science from public scrutiny. Demand transparency!
  • 🚨 Spotting Clickbait – Sensational titles usually mean weak science underneath
  • 🔬 Methodology Matters – Sample sizes, controls, and open data separate real from fake
  • 📚 Source Credibility – Predatory journals publish anything for money. Trust matters
  • ⚖️ Nuanced Conclusions – Good scientists admit what they don't know. Beware certainty!
  • 🌐 Crowd vs. Experts – You learned how public perception differs from scientific quality

Keep Exploring Science!

When reading science news:

  • Look for the original research paper
  • Check if data is openly available
  • Question sensational claims
  • Consider the source and methodology

Curious About the Results?

See how all participants rated the studies, including average scores, rating distributions, and agreement levels across the group.

πŸ“Š View Live Dashboard

Real-time ratings and statistics from all participants

πŸ’Ύ Save Your Results

Your username:

Want to review your personalised analysis later? Save your results link now! You can return to this page anytime to see:

  • Your rating profile and statistical comparison
  • How your ratings compare to the crowd average
  • Your complete prediction accuracy scores

Or visit the live dashboard and enter your username () to find your results.

Thank you for participating! πŸ”¬βœ¨

πŸ“‹ Workshop Information

Author: Dr Pablo Bernabeu
Department of Education, University of Oxford

App Materials: github.com/pablobernabeu/Unlock_the_Lab

Licence: CC BY 4.0 Attribution β€” You're free to use and adapt these materials provided you acknowledge the authorship.