As a check on Part I and my rating system, here is a visual look at how the career awards and accolades discussed in Part II are distributed across the 5-Star rating system. This is a section I ended up cutting from Part II, but it helps show that the system I’ve been using produces realistic results, so I figured I may as well share it in case you were questioning the statistics underlying the conclusions.
With a radar or “spider” chart, you can see the relative distribution changes. The shaded areas indicate the distribution, and the simplest way to think of these is to imagine you’re reading a clock. There are six sectors, with the 5-Star rating at the 12 o’clock position. Starting with the MVP chart, you can see that the shaded area is contained almost entirely within the first sector. As you read through the seven charts above, you’ll see the shading (distribution) creep clockwise, encompassing more of the lower ratings.
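For anyone curious how a chart like this is built, here is a minimal sketch of one such radar chart using matplotlib. The tier labels mirror the 5-Star scale described above, but the distribution values are purely illustrative placeholders, not the actual award data from Part II:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Six rating tiers, 5-Star at 12 o'clock, descending clockwise.
tiers = ["5-Star", "4-Star", "3-Star", "2-Star", "1-Star", "0-Star"]
# Hypothetical share of award winners in each tier (illustrative only).
share = [0.85, 0.10, 0.05, 0.00, 0.00, 0.00]

# One angle per sector; close the polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(tiers), endpoint=False)
theta = np.concatenate([angles, angles[:1]])
values = share + share[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.set_theta_zero_location("N")    # put 5-Star at the 12 o'clock position
ax.set_theta_direction(-1)         # sectors read clockwise, like a clock
ax.plot(theta, values)
ax.fill(theta, values, alpha=0.3)  # the shaded area is the distribution
ax.set_xticks(angles)
ax.set_xticklabels(tiers)
fig.savefig("radar.png")
```

Plugging in a different `share` list for each award would reproduce the clockwise-creep effect described above, as the weight shifts into the lower-rated sectors.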
There are two things I see in these spiderwebs that give me some comfort. First is the “rotation of the clock,” as noted above. The hurdles for the MVP, HOF, or All-NBA are higher than for All-Star, so if the 5-Star scale is in sync — or at least in general agreement — with the subjective awards, we should see that clockwise spin. (Note: All-Defense would dip the furthest down the scale, not because the players who make that team are worse, but because there is no statistical system that I’m aware of that accurately, or even adequately, quantifies defensive contribution.)
The second area of comfort is that the award winners are concentrated higher on the scale. The only award with a significant concentration at 2-Star or below is All-Defense, which can again be rationalized by pointing out the lack of adequate defensive statistics.
All in all, the 5-Star scale, in my opinion, passes the sniff test.