Friday, May 17, 2013
I've had a couple of "interviews" appear online recently.
The first is at the Random Wizard blog. Check out some of his other old-school-focused posts while you're there.
The second is the D&D Podcast from May 10 with Mike Mearls, Jeremy Crawford, and Rodney Thompson. We talk about the genesis of 2nd Edition AD&D, Al-Qadim, and other topics related to the late '80s and early '90s. It was a fun recording session, and it turned into an excellent podcast, in my biased opinion.
I'm generally a bit reluctant to do these types of things because there's always that part of me that thinks, "aww, people don't really care about what I have to say." If I really believed that, of course, I wouldn't bother with this blog -- yet I constantly prod myself to bother with the Howling Tower more, not less, and interviews and podcasts usually draw positive reactions.
Just two and a half weeks remain until I get on a plane and fly to Dallas for another installment of the North Texas RPG Con. A month ago, my prep work for this show was steaming along at flank speed and I thought I had everything well in hand. Now I'm closing in on my usual pre-convention, gotta-get-everything-done cram sessions. I'm not hitting the panic button yet; there's still time to get everything finished, if nothing goes wrong and I don't let myself fall into the usual trap.
Wednesday, May 1, 2013
3 Castles Award
Last week I mailed in my ballots for the 3 Castles Award, which will be handed out at the North Texas RPG Con in June. The 2012 winner was Stars Without Number, and the 2011 winner was The Dungeon Alphabet.
The nominees this year were:
- Adventurer Conqueror King System (Alexander Macris, Greg Tito, Tavis Allison, Autarch LLC)
- Astonishing Swordsmen & Sorcerers of Hyperborea (Jeffrey Talanian, North Wind Adventures)
- Barrowmaze / Barrowmaze II (Greg Gillespie)
- Cavemaster (Jeff Dee, UNIGames)
- Dungeon Crawl Classics RPG (Joseph Goodman, Goodman Games)
That’s a strong lineup. I’ve done this type of judging before, both for the 3 Castles Award (2011) and for the Origins Awards. Although nominees for awards are (almost) always impressive, in the past I’ve found it fairly easy to trim the list down to just two or three top contenders that really need to be wrestled with.
But this one was a tough call. Every title offered outstanding features offset by a few weaknesses for the others to exploit.
Judging criteria for the 3 Castles Award are pretty well spelled out. The judges’ instructions are not public, but there are no real surprises involved. Basically, the criteria describe six categories on which to grade the contestants and offer some questions to ask yourself as you’re doing the evaluation. It’s a solid system, and because it’s codified, every judge should wind up considering the entrants on approximately the same merits. Of course, different judges can look at the same features and value them differently—it’s still a subjective process—but that’s why you have a team. At least everyone is looking at the same features and, because the scoresheets are mailed to the central committee for tallying, no one can just say “I like this one best” without showing how they graded the competition.
Instead of reading and then grading each product in turn, I started by getting thoroughly familiar with all five. Then I judged all five titles on category A, wrote those scores on an index card, and placed that card aside, out of view. With category A done, I advanced to category B, and so on. My purpose in doing it this way was to prevent myself from seeing how the scores were shaping up before I was completely done with the rankings. That way, my judgment wouldn’t be biased by a subconscious awareness that one title was pulling ahead on points.
After going through that process and scoring everything as prescribed by the 3 Castles guidelines, I did it again with my own grading system, just to see how the two would compare. Only then did I tally the scores from both systems. Happily, the results were nearly the same. The raw numbers differed, but the rankings were almost identical: positions 1, 2, and 3 were the same both times, with 4 and 5 swapping places between the official scoring system and mine.
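Purely as an illustration (and emphatically not the official 3 Castles scoresheet), here's a minimal sketch of that workflow in Python: one "index card" per category, no running total visible until every card is filled in, and a final check of whether two different rubrics produce the same ordering. The title abbreviations and lettered categories echo the post; the data structures, function names, and placeholder scores are all invented.

```python
# A hypothetical sketch of the blind, category-by-category tally described
# above. The titles and six lettered categories mirror the post; every
# score below is a made-up placeholder, not anyone's actual ballot.

TITLES = ["ACKS", "AS&SH", "Barrowmaze", "Cavemaster", "DCC RPG"]
CATEGORIES = ["A", "B", "C", "D", "E", "F"]

def blank_card():
    """One 'index card': a single category scored for all five titles."""
    return {title: 0 for title in TITLES}

def ranking(cards):
    """Tally totals only after every category card is filled in,
    then return the titles ordered best-first."""
    totals = {title: 0 for title in TITLES}
    for card in cards.values():
        for title, score in card.items():
            totals[title] += score
    return sorted(TITLES, key=lambda t: (-totals[t], t))

# Score one category at a time, setting each card aside before moving on,
# so no running total is visible until the very end.
official_cards = {cat: blank_card() for cat in CATEGORIES}
personal_cards = {cat: blank_card() for cat in CATEGORIES}
# ... fill in official_cards and personal_cards from the two rubrics ...

# Comparing the two systems: identical orderings mean the rubrics agree
# even when the raw point totals differ.
print(ranking(official_cards) == ranking(personal_cards))
```

The takeaway is the same one made above: two point scales can disagree on the raw numbers and still agree on the ranking.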
I was pleasantly surprised that the #1 finisher both times was not the title my gut and first impressions told me was likeliest to come out on top. I consider that validation of both the 3 Castles questionnaire and the decision to hide the ongoing tallies from myself during the process.
All I know at this point is how I scored the five contenders. Like everyone else, I won’t know who the winner is until the prize is handed out in Dallas on June 8. I’m as eager as anyone to see who will take home the award. They’re all high-quality efforts deserving of success, and I thoroughly enjoyed the time that I spent with each of them.
Will I continue playing any of them, now that the judging is done? Sadly, time is so limited and games are so plentiful that the answer probably is no. The one exception is Barrowmaze. I ran a portion of it for my OD&D group last year, using the Chainmail combat system. But I'd like to try it again (with a different group) using D&D Next playtest rules, to see how that goes. I just need to find some players ... and some time ...