1. I am writing a midterm today. I gave a similar exam last year, entirely in short-answer/essay format. I think it was a good test, but it was a bear to grade. I'm swapping in some multiple-choice questions this time. Writing them is slow.
I think I'm good at writing tough multiple-choice questions. I'm not sure that's a skill I really want to have, but I think I've got it. Yesterday I was filing some papers when I stumbled across midterm evals from my students in the spring. I'd asked them for exam feedback. "Tricky!" said one paper after another. "Tricky! Tricky! Tricky!" They sounded pretty grumpy. I am keeping that in mind as I write this exam.
2. Teaching is a constant reminder that you can't please all the people all the time. I am by nature a pleaser. This is an ongoing tension.
3. I had a manuscript rejected yesterday, but I am not crying into my coffee over it. I submitted it to Fancypants Journal, which rejects more than 90% of its submissions, at the encouragement of a friend who has published in Fancypants Journal multiple times. I was braced for it to get rejected immediately upon receipt in July, but they sent it through peer review. (Good thing #1.) One of the reviews consisted of three dismissive sentences, but the other was lengthy, detailed, and full of helpful ideas presented in an encouraging way. (Good thing #2.) It's so much nicer when a reviewer takes the time to say, "Here's how this could be a better paper."
4. Now here's something that strikes me as weird: both reviewers dinged me for my sample size. I have data for 47 kids. This is a small sample, small enough that I wouldn't fund a study like this one if I were handing out grant money. But it seems to me that the concern should be about study planning, not about study publication.
When you use statistics you're trying to avoid two kinds of mistakes. The first, Type I error, is when you say you've found a meaningful association but it's really just random noise in the data. Your chance of a Type I error is called your alpha, and it's often set at .05 in the behavioral sciences. The second kind of mistake, Type II error, is when you say "there's no association here! nothing to see -- move along!" -- even though there actually is an association. That chance is called your beta, and it's sensitive to sample size: for an effect of a given size, the larger your sample, the smaller your beta. (The flip side, 1 minus beta, is what statisticians call power.) If you look for a subtle effect in a group of 10 people, you're less likely to see it than if you're looking in a group of a thousand. If you say, based on your sample of 10, that the effect doesn't exist, you've got a Type II error going on.
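If it helps to put numbers on that, here's a quick Python sketch. It assumes a plain two-group t-test with a modest made-up effect (nothing to do with my actual study design) and simply counts how often the test misses the real effect at two sample sizes:

```python
# Rough illustration of Type II error (beta) shrinking as the sample grows.
# Made-up setup: a two-group t-test with a modest true effect (about 0.4 SD);
# this is for illustration only, not my study's actual design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.4   # difference between group means, in SD units
alpha = 0.05
trials = 2000

for n_per_group in (10, 1000):
    misses = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(true_effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p >= alpha:          # the effect is real, but the test didn't see it
            misses += 1
    beta = misses / trials
    print(f"n = {n_per_group:4d} per group: beta ~ {beta:.2f}, power ~ {1 - beta:.2f}")
```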
I'm sure my beta was huge, but why does that matter now? Type II error is the risk of missing a real effect, and I didn't miss anything: I'm not reporting a null finding; the hypothesized association was right there in my data. My alpha is .05, same as it would be if I had a sample size of 200. To get a significant result in a smaller sample, you have to have a clearer pattern. Isn't it kind of interesting that I found a definite trend in a small sample?
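And a companion sketch for that last point, same made-up t-test setup as above but with no real effect at all: the false-positive rate sits near .05 whether n is 10 or 200, while the smallest difference that clears the significance bar is far larger in the small sample:

```python
# Companion sketch: alpha doesn't depend on sample size.
# Same made-up two-group t-test setup, but with NO true effect, so any
# "significant" result is a false positive (Type I error).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
trials = 2000

for n_per_group in (10, 200):
    false_positives = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)   # same mean: the null is true
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            false_positives += 1
    # Roughly the smallest group difference (in SD units) that would come out
    # significant at this n, using the usual pooled-SD t-test formula.
    crit_t = stats.t.ppf(1 - alpha / 2, df=2 * n_per_group - 2)
    min_detectable = crit_t * np.sqrt(2 / n_per_group)
    print(f"n = {n_per_group:3d}: false-positive rate ~ {false_positives / trials:.3f}, "
          f"smallest 'significant' difference ~ {min_detectable:.2f} SD")
```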
5. One of the things that persuaded me to stop reading Amy Tuteur was the post in which she conflated Type I and Type II error in her critique of a study, because DUDE that's a rookie error. And yet I am getting a similar message from these reviewers. Are we just supposed to think that bigger is better? Am I missing something from a stats point of view?
6. I have been writing multiple-choice questions in between takes and I think I'm done. Hurray! Tweaking the old short-answer questions should be fairly quick, and I'll be good to go. That's a relief.
7. Kids will be home from school any minute and I'm done with my work for the week. This calls for chocolate applesauce muffins, I do believe. (Thanks for the suggestion, Linda!)