I recently read "Amos Walker: The Complete Story Collection" and "The Left-handed Dollar" by Loren Estleman. Amos Walker is a fictional Detroit PI introduced by Estleman in a 1980 novel ("Motor City Blue"). "The Left-handed Dollar" is the latest of 20 novels in the series (many of which I have also read). I found it typical: solid, but in my opinion somewhat below the level of the best in the genre. I liked the short story collection a bit more. It collects the 32 previously published Walker short stories and adds a new story and an introduction by Estleman, totaling some 600+ pages. I found the stories curiously addictive; it was always tempting to read just one more (sort of like potato chips). I generally don't like short stories that much, but a series of short stories like this is a bit different in that it reads like a very episodic novel, and I found I rather liked the form. The early PI writers Hammett and Chandler wrote numerous short stories, which I also liked in collected form, but the short story seems to have largely fallen out of favor since (for market reasons, with the decline of the pulp magazines).

The stories have some faults and aren't for everybody. As is traditional in the genre, they exhibit a rather dark view of human nature, which didn't bother me but which some readers might find a bit wearing, particularly when repeated for 600 pages. The plots sometimes don't hold together if you think about them too much. Walker is prone to sociological asides which I sometimes found bizarre and others might find offensive. And I didn't like the way the stories were ordered: Estleman says in the introduction that they are arranged roughly chronologically, but they still jump back and forth in time a lot, which I found a bit jarring; I would have preferred a strict chronological order. However, I liked the collection, and if you also like PI stories (of the traditional "hard-boiled" variety) you might give it a try.

## Sunday, February 20, 2011

## Sunday, February 13, 2011

### Chuck Tanner RIP

As I was driving home Friday night I heard on the radio that Chuck Tanner had died. I was a bit sad to hear this, as I had followed his career from a distance since playing a game of postal chess with him long ago, when I was a teenager and he was a minor league baseball manager. Shortly thereafter he had made the jump to managing in the big leagues, where he was moderately successful, including leading the 1979 Pittsburgh Pirates to a World Series championship.

I believe our game was in one of the Golden Knights tournaments. About the only detail I remember is that at one point he wrote something about not liking the fact that in chess you could concentrate for hours and then spoil a game with a small oversight, and I had made some reply about dropping a pop-up in the ninth inning.

## Sunday, February 6, 2011

### Grading Teachers

This post is a follow-up to my earlier post criticizing a paper by Hanushek, "The Economic Effect of Higher Teacher Quality"; here I address an issue raised in the comments.

Evaluating teachers based on how well their students do academically is difficult because teachers aren't (currently in the US) the only or even the most important factor in student success. So to properly measure the effect of teachers you first have to account as best you can for the other important factors. This is typically done by constructing a model which predicts how well a student will do based on everything besides their current teacher, and then comparing this prediction to their actual performance. The prediction will assume, in effect, that their current teacher is average. Any difference in actual performance is then attributed to the teacher being above or below average (depending on whether the student did better or worse than predicted). Clearly this will be very unreliable for a single student, but the hope is that, when averaged over many students, the errors will tend to cancel out, allowing differences in teacher quality to be detected.
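The prediction-and-residual procedure above can be sketched in a few lines of Python. This is my own toy illustration, not code from Hanushek's paper, and all the scores are made up:

```python
# Toy illustration of the value-added idea: for each student we have a
# predicted score (from a model assuming an average teacher) and an
# actual score. Individual residuals (actual - predicted) are noisy,
# but their average over the class is taken as the teacher's effect.

def teacher_effect(predicted, actual):
    """Average of (actual - predicted) over a teacher's students."""
    residuals = [a - p for p, a in zip(predicted, actual)]
    return sum(residuals) / len(residuals)

# Hypothetical class: the per-student residuals vary (4, -3, 3, 3, 3),
# but the average comes out positive, i.e. an above-average teacher.
predicted = [70, 55, 88, 62, 75]
actual    = [74, 52, 91, 65, 78]
print(teacher_effect(predicted, actual))  # prints 2.0
```

With only five students the single negative residual barely dents the average; with one student it would flip the sign entirely, which is the unreliability the paragraph above describes.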

What do these models look like? The most important factor in how well a student does is the characteristics of the student themselves: things like how smart they are (in terms of IQ), how well educated their parents are, what their household income is, etc. Such differences among students are far more important in predicting how well they will do academically than differences among their teachers. Since we are generally interested in evaluating the effect of a teacher over a school year, another important factor is how well the student has done previously. If you are evaluating a third grade teacher and at the beginning of third grade a particular student is 4 months above their predicted grade level, this must be accounted for when predicting where that student will be at the end of third grade (assuming an average teacher). Empirically, students doing better or worse than expected at the beginning of a school year will still be doing better or worse than otherwise expected at the end of the school year, but not by as much. So, for example, the student 4 months ahead of their predicted grade level might be expected to be 2 months ahead of their predicted grade level a year later. Similarly, a student 4 months behind might be expected to be 2 months behind a year later. Models typically account for this by including a decay factor r, so a student x months ahead of their expected grade level at the beginning of a school year will be predicted to be r*x months ahead of their otherwise expected grade level at the end of the school year. Note that such models predict the difference n years later will be (r**n)*x; this is a consequence of iterating the model predictions.
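The iterated decay is easy to see numerically. A minimal sketch (my own illustration, using the 4-months-ahead example from the text and assuming r = 0.5 purely for concreteness):

```python
# Decay model: a student x months ahead of their otherwise-predicted
# grade level is predicted to be r*x months ahead a year later,
# r**2 * x two years later, and in general (r**n) * x after n years.

def predicted_advantage(x_months, r, n_years):
    """Predicted months ahead of expected grade level after n years."""
    return (r ** n_years) * x_months

print(predicted_advantage(4, 0.5, 1))  # prints 2.0 (the example above)
print(predicted_advantage(4, 0.5, 9))  # prints 0.0078125
```

After nine more school years the original 4-month advantage has shrunk to well under a hundredth of a month, which is the point about early-grade effects decaying away made in the next paragraph.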

Hanushek uses such a model in his paper but does not appear to understand the implication noted above: that any good or bad effects of, for example, grade 1-3 teachers will have largely decayed away by the time their students leave school and enter the work force. Hence, as I noted before, he does not appear to be correctly computing the predicted economic effects implied by his own model assumptions.

