10 Ways to Become a [BAD] Learning Scientist

So far, in our work towards the Learning Science book, Sae Schatz, Geoff Stead, and I have written some thoughtful articles about complex ideas like Learning Ecosystems, Social Metacognition, and even the Nature of Knowledge itself. We’ve tried to provide a thoughtful, practical, and research-grounded narrative.


But… what if there were an easier way… a secret, accelerated path to success that skipped past the lengthy analysis and plodding methodologies?

In today’s post, we’re offering a handy guide that will help you to add sparkle to any idea, and provide the tips you need to wow clients and partners with the thinnest veneer of empiricism and credibility – without all of that boring work.

After all, on the 1st of April, why strive and struggle when there is a shortcut?

Read on for our top 10 all-powerful tips that can turn your lacklustre L&D report into a powerful, scientific research paper to impress your friends and wow your boss:

Part 1: Finding Published Research

Our first ideas involve creative ways to use published research (the best sort, right?) to build on an academic foundation and prior evidence. These tips will help you create [the appearance of] rigour and quality, and let everyone know you’ve done your ‘due diligence’. Pretty much whatever you are looking at, you can find some relevant articles to support it… Here’s how:

  • [1] Use evidence from individual laboratory studies (because the real world is just like a lab). Nearly every concept in L&D has been researched… somewhere. Frequently, these studies are conducted in deliberately constructed environments with the object in question (such as a particular instructional method) carefully isolated for evaluation in a helpful (unrealistic) vacuum. Some of these studies include just a handful of participants, and about half of the time, their positive results are just a statistical fluke [1] — so you’re certain to find a publication with some shiny statistics to support whatever you might claim! When you have found it, just paste the citation into every document.
  • [2] Be creative in how you generalise research (because ‘context’ is just a detail really). Closely related to the recommendation above, this next piece of advice is to apply research findings broadly and into new spaces. Don’t worry about the populations involved, whether the learning conditions were realistic, or if the results are replicable. Just search online, find an article with good numbers, et voilà! If you need a good example of this, just look at the work on Growth Mindset: empirically examined in one domain and context, and then widely generalised as if it were a universal ‘thing’.
  • [3] Emphasise the ‘statistical significance’ (because like probably 99% of the time you’ll be right). A lot of people aren’t well-versed in parametric statistics, but most people in the L&D community have probably heard of ‘alpha’ or ‘p-value’. Your best approach is to showcase that statistic prominently, and when you cite foundational research, make sure to emphasise its p-value (for example, “p < 0.05”). Consider adding exclamation marks! We advise liberally using the phrase ‘statistically significant’ or even just ‘significant’ when referring to research with a p-value of less than 0.05. Using the word ‘significant’ lends credibility to the research findings. Consider this the research equivalent of ‘artisanal’ (when describing cheese) or ‘craft’ (when discussing beer). (If you doubt how easy such results are to come by, see the sketch after this list.)
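
To see how little effort tips [1] and [3] really require, here is a minimal Python sketch (our own toy simulation, not drawn from any of the studies cited above) that runs a pile of small laboratory studies in which the intervention genuinely does nothing, then counts how many still clear the magic p < 0.05 bar. The sample size and number of studies are illustrative assumptions.

```python
# A toy simulation (not from any cited study): many small lab studies of an
# intervention with ZERO true effect, counting how many still come out
# "statistically significant" purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_studies = 1000      # an imaginary literature of small lab studies
n_per_group = 15      # "just a handful of participants" per group
significant = 0

for _ in range(n_studies):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_group)  # same distribution: no real effect
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < 0.05:
        significant += 1

print(f"{significant} of {n_studies} null studies were 'significant' at p < 0.05.")
# Expect roughly 5%: across a big enough literature, that means there is
# always a shiny citation available for whatever you want to claim.
```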

Part 2: Creating Original Research

Next, you probably need to do some ‘validation’ research on your own, so you can ‘prove’ that your specific L&D offering is effective. Here are some tips for getting the best results:

  • [4] Use a pretest/posttest design (because ‘better’ is ‘better’ in every case). Let’s say you’ve made a new training program and want to show how awesome it is. (Think: performance review coming up soon.) You need to collect some training outcomes. Here’s how to make sure they look good. First, give participants a pretest, then do your training, and afterwards give them a posttest. Don’t worry too much about what happens in between the tests. You’ll probably get a medium effect size improvement [2] just from the retest effect [3] alone – which is like free progress really (see the first sketch after this list). And magically, this works best if the two tests are the same, but that’s not actually even a requirement. Add some test-prep and coaching into your training, and you’ll get even bigger results!
  • [5] Use Placebo and Hawthorne effects liberally (because if you can measure it, it counts). The medical community has studied [4] placebos extensively and found them to have massive impacts. Although the percentage varies depending on a study’s purpose and participants, it’s often around 20–30% – but can be upwards of 72% [5]. A related organisational phenomenon is the Hawthorne effect [6], which basically shows that when workers are given special attention and observed, their performance increases. So, you can easily find impressive results simply by creating an intervention that piques learners’ Placebo/Hawthorne responses. Just fuel their expectations, give them some attention, and make sure they know that you’re watching. This technique is particularly useful as it liberates you from actually creating effective learning.
  • [6] Design your experiment for success (because, why take the chance?). Once you have your pre- and posttests and Placebo/Hawthorne triggers ready to go, it’s time to create the study’s protocol. Some experimental designs work better than others. Specifically, you’ll have bigger effect sizes [7] if you use (a) correlational or quasi-experimental designs (in other words, avoid participant randomisation and blind/double-blind assignments!), (b) proximal testing (evaluations that closely mirror the intervention and are completed close to it, like a written posttest completed shortly after training), and (c) a small population (stick to fewer than 500 people). Remember: we use veneers because they let us apply a valuable material very cost-effectively. Think of your time as the veneer: the more you save, the more of The Mandalorian you can catch up on later.
  • [7] Count everything (because MEASUREMENT FOR THE WIN). We’ve already talked about collecting pre- and posttest outcomes, but you’ll need more than that! Collect data on everything, so that you have a lot to play with after the experiment. Start by asking for detailed demographic data, because you might find your experiment works best for left-handed, bilingual women ages 25 to 50 – so you’ll need all of those variables in-hand to find that needle in the haystack. Next, collect data on anything that’s countable, for example, number of hours spent in training or number of words read. You can also selectively count parts of self-response surveys, such as the number of items rated above ‘satisfactory’.
  • [8] Use statistical tricks (because it’s not cheating if it’s just maths). If you’ve followed our prior recommendations, then you already have some impressive results, but if you’re still struggling (or want to boost the results further), you can massage the data. There’s a large toolkit of data-dredging hacks [8] that (bad) scientists have perfected over the years, such as p-hacking [9] (manipulating the statistics to get a suitable p-value), fishing (playing with the statistics until some superficially nice-looking result appears, whatever it might be), or simply continuing to run the experiment [10] until you get enough data to support some desired result (the second sketch after this list shows just how well that one works). This is all good: after all, what’s the point of putting in the effort unless you can show success? Nobody ever learnt from getting anything wrong. And that’s a [statistically significant] fact!
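
The first sketch below illustrates tip [4]. It is our own toy example with assumed numbers (not taken from references [2] or [3]): the training teaches nothing at all, and the only thing separating pretest from posttest is a modest retest bump from having seen the same test before – yet the paired effect size still looks respectably ‘medium’.

```python
# A toy pre/post example with assumed numbers (not from refs [2] or [3]):
# learners gain nothing from the training itself; the posttest only benefits
# from a small retest bump (familiarity with the same test items).
import numpy as np

rng = np.random.default_rng(7)
n_learners = 40

true_skill = rng.normal(loc=50, scale=10, size=n_learners)    # unchanged by the training
pretest = true_skill + rng.normal(scale=5, size=n_learners)   # measurement noise
retest_bump = 4                                               # assumed practice/familiarity gain
posttest = true_skill + retest_bump + rng.normal(scale=5, size=n_learners)

gain = posttest - pretest
cohens_d = gain.mean() / gain.std(ddof=1)                     # paired-samples effect size

print(f"Mean 'improvement': {gain.mean():.1f} points, Cohen's d = {cohens_d:.2f}")
# With no real learning at all, the retest bump alone produces a
# respectable-looking effect size: free progress, exactly as promised.
```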

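The second sketch covers the ‘just keep collecting data’ trick from tip [8]. Again, this is our own toy simulation rather than anything from references [8]–[10]: we peek at the p-value after every small batch of participants and stop the moment it dips below 0.05, which pushes the false-positive rate well past the advertised 5%.

```python
# A toy simulation of 'optional stopping' (ours, not from refs [8]-[10]):
# peek at the p-value after every batch of participants and stop as soon as
# p < 0.05, even though the intervention has no true effect at all.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_hacked_study(max_n=200, batch=10):
    control, treatment = [], []
    while len(control) < max_n:
        control.extend(rng.normal(size=batch))
        treatment.extend(rng.normal(size=batch))   # same distribution: no real effect
        _, p = stats.ttest_ind(treatment, control)
        if p < 0.05:
            return True                            # stop early and declare victory
    return False

n_sims = 500
false_positives = sum(one_hacked_study() for _ in range(n_sims))
print(f"'Significant' results in {false_positives / n_sims:.0%} of null studies "
      f"(an honest, fixed-sample design would give about 5%).")
```
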
Part 3: Communicating About Your Amazing, Evidence-Based Results

We’ve made great progress so far. Using the foundations of Western scientific methodology, we’ve been able to add real value with minimal effort. But there’s one more step. After you’ve assembled a supportive literature review and conducted your own empirical testing, it’s time to share your results. There are plenty of good guides to writing (bad) research articles, with excellent advice such as, “never explain the objectives of the paper in a single sentence… in particular never at the beginning” [11] and make sure to “use different terms for the same thing” [12]. In addition to that great guidance, we’ll add two more suggestions:

  • [9] Build on personal experience (because feeling IS believing). People love personal stories, and we’ve all experienced education and training before – so, we’re all mini-experts on the subject of learning. Work with that. Use your own experiences or, even better, reference common human experiences as naturalistic evidence. After all, we’re all humans, and we all think and learn in the same ways. So, these common experiences will help people relate to your new L&D idea. Draw readers or customers in with anecdotes about personal experiences, and then generalise from those experiences to help explain and support your concept.
  • [10] Use snazzy terminology (because with a growth mindset, we can be neuro-informed): Like a well-tailored suit on a businessperson, certain words add polish that can make or break your L&D idea. At a minimum, make sure to use both ‘Machine Learning’ (ML) and ‘Artificial Intelligence’ (AI) [13]. (Don’t worry if you don’t actually use AI, because a lot of so-called AI startups don’t either! [14]) Next, pick a few L&D terms that describe your idea or offering. Finally, include a few classic innovation words, like ‘emerging’ or ‘cutting-edge’, so that people know this is a new concept. Don’t worry if this seems like hard work: Sae has put together a table to help. Start with the following prompt and select a word from each column to fill it in – or let the little generator sketched below do it for you:

Our concept uses AI/ML and [column 1], [column 2] [column 3] to optimise [column 4].

[Image: Sae’s buzzword table, with word lists for columns 1–4]
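
If even filling in the template feels like work, here is a tiny Python sketch that does it for you. The original table was shared as an image, so the word lists below are purely illustrative placeholders rather than Sae’s actual columns.

```python
# A tiny buzzword generator for the template above. The word lists are
# illustrative placeholders; the original table was an image, so these
# are not Sae's actual columns.
import random

column_1 = ["adaptive", "personalised", "neuro-informed"]
column_2 = ["gamified", "immersive", "microlearning"]
column_3 = ["nudges", "pathways", "analytics"]
column_4 = ["learner engagement", "workforce performance", "knowledge retention"]

def generate_concept() -> str:
    return (f"Our concept uses AI/ML and {random.choice(column_1)}, "
            f"{random.choice(column_2)} {random.choice(column_3)} "
            f"to optimise {random.choice(column_4)}.")

print(generate_concept())
# e.g. "Our concept uses AI/ML and adaptive, gamified nudges to optimise learner engagement."
```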

Conclusion

To summarise: if you follow these 10 steps, you should be well positioned to become a published learning scientist with a spate of innovative, evidence-based AI/ML concepts (among other more questionable descriptors) tied to your name.

BONUS tip!

  • [11] Make beautiful data visualisations. Any data represented in an infographic is automatically more valid than a table. Ideally you should embellish your presentations with animations. To avoid confusion, eliminate distractions such as standard deviation notation or error bars, which just get in the way of a good story. Instead, opt for basic graphs wherever possible, like bar charts with just one or two items. You can, for example, make a dashboard of vanity metrics such as number of hours spent learning, smile-sheet scores, or change in pre- to posttest results. Basically, anything that is countable can be included (so long as the numbers look right, of course). And use orange, because it’s a warm colour, and everyone loves a winner. (The sketch below shows the general aesthetic.)
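
In that spirit, here is a minimal matplotlib sketch of the general aesthetic, using invented numbers: two orange bars, a y-axis that starts just below the smaller value so a modest bump looks heroic, and not an error bar in sight.

```python
# A minimal sketch of the bonus-tip dashboard aesthetic. The numbers are
# invented; the 'technique' is the truncated y-axis, the missing error
# bars, and of course the orange.
import matplotlib.pyplot as plt

labels = ["Before training", "After training"]
smile_sheet_scores = [4.1, 4.3]            # invented vanity metric (out of 5)

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(labels, smile_sheet_scores, color="orange")
ax.set_ylim(4.0, 4.4)                      # start the axis just below the data: instant drama
ax.set_ylabel("Smile-sheet score")
ax.set_title("Transformational impact")
# No standard deviations, no error bars: they would only get in the way
# of a good story.
plt.tight_layout()
plt.show()
```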

References

[1] https://fantasticanachronism.com/2020/09/11/whats-wrong-with-social-science-and-how-to-fix-it/

[2] https://www.illuminateed.com/blog/2017/06/effect-size-educational-research-use/

[3] https://onlinelibrary.wiley.com/doi/full/10.1002/ets2.12300

[4] https://d1wqtxts1xzle7.cloudfront.net/77671214/j.drudis.2007.03.00720211230-10882-xum534-libre.pdf?1640853596=&response-content-disposition=inline%3B+filename%3DPlacebo_new_insights_into_an_old_enigma.pdf&Expires=1676574955&Signature=eOStJeYf11dfXpZt~gSkAdBPfLboIFzGnjbWOEP9j4ruGDVc4qAr952isl~seX~o3znvFln4YlleFG~hVexXkKmX6tQbHqjHDHRWEq4HVOWUfedRY3zx~cEkm0f~HPQ0sIqdYfNbSh3AZrTsYGOobfi35-I4Lr4OxRcHulJhuRxyuCWkaIm82gT9BsoKDa-kGWe~vP1YPZ-UxCSjmOEwkF3qBAQFrdbPeWJ~h-7nZ7bYgDw8kLAI5cyVRDt8Q8mVZeEOUIuxmDACOAo84d4lBs-4M~6CXP2trzyOFLzGslDikAb6EqFjrrbbDbqrv~MvvuwQP3Kj4XdaR4jefSM74A__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA

[5] https://d1wqtxts1xzle7.cloudfront.net/60248249/Placebo_Effect_and_Athletes.1020190809-76103-1fcw8va-libre.pdf?1565382475=&response-content-disposition=inline%3B+filename%3DPlacebo_Effect_and_Athletes.pdf&Expires=1676575403&Signature=dhfdivlypEaqvKtt~4i2406TQsKiJQwcBkVMZftYIIByr4oFnhB8O9FZW7BZe-4B~UXuu~1xtQjOBlGEvb066rbELjugtVM01oBDkK4kL95ORPnVD~hp7~ta16YSMvWAD-GJZfEimFVX1YHH3awt7Uvqg8hDGtQ-v58nuYgYck3lWjqcZIscLaLKgQeIutDFtQ26wgNteXMQfvet3d~Elz081ZRQ8ZZCoKQhgQW4EOV1ldyMtUZI-mhFNJU6OMOQKvMskNaB2H-uIQQS1c2miQ84r~6-TTMNa6n1xIS0syyu2Fma~d2fZEddlX0LCVx7tMoCUAC643DWbtkYE3CQZg__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA

[6] https://psycnet.apa.org/record/2000-13580-004

[7] https://evidenceforlearning.org.au/news/effect-sizes-in-education-bigger-is-better-right

[8] https://catalogofbias.org/biases/data-dredging-bias/

[9] https://files.de-1.osf.io/v1/resources/xy2dk/providers/osfstorage/623224d733d8540487f8ad21?action=download&direct&version=2

[10] https://theness.com/neurologicablog/index.php/p-hacking-and-other-statistical-sins/

[11] https://pubs.acs.org/doi/10.1021/ac2000169

[12] https://www.elsevier.com/connect/authors-update/10-tips-for-writing-a-truly-terrible-journal-article

[13] https://www.verdict.co.uk/ai-in-education-buzzwords-hyperbole/

[14] https://www.theverge.com/2019/3/5/18251326/ai-startups-europe-fake-40-percent-mmc-report

Comments

Dr. Nigel Paine

Co-Presenter @ Learning Now TV | Dprof. in Learning And Development

1y

Excellent work Julian: even on April 1st your brain is working away creatively and mischievously.

Karthick Richard

Master of none; adequately functional at a few

1y

#12, tell a story. Everyone loves a good story.

Geoff Stead

Product strategy and innovation. CPO at MyTutor. Author of Engines of Engagement

1y

Lots of inspiration to be had here… (And a few giggles!)
