I. Introduction
Every game matters. At least, that’s what we’re told, right?
When the clock turns to March, and the 12 jurors that make up the NCAA Tournament selection committee have the unenviable task of comparing the résumés of hundreds of basketball teams across thousands of games, in order to create the perfect 68-team March Madness bracket, all in the span of less than a week, while games are still being played, it’s sometimes hard to remember what exactly went down in every single one of those contests.
But have no fear! The committee makes it easier on itself with a number of nifty tools that crunch thousands of games’ worth of data into manageable bullet points to discuss. If you’re somewhat in tune with how bracketology works, then you’re probably familiar with most of these—the NET rankings, the quadrant system, strength of schedule numbers, and a variety of results-based and efficiency-based metrics, which all come together to form an easily digestible team profile that can be compared and contrasted with the rest of Division I’s 364 squads.
This is a great way to compact all those tournament résumés into something easy to handle, right? Well, mostly. Problem is, many of the most important data points that the committee considers are severely flawed.
Take the quadrant system, for example. Why is a road win over the NET #1 team treated the same as a home win over the NET #30 team? And predictive metrics like KenPom and T-Rank don’t actually factor in the outcome of games played, while performance metrics like SOR and KPI don’t provide much context for what happened inside the win-loss results that they measure.
Games and the data within them are not given the proper weight, and, thus, not every game matters, despite what we’re often told.
No metric that the committee relies on adequately captures the whole picture. So that’s exactly what the Bauertology Résumé Calculation Tool, or BRCT (pronounced like “bracket”), aims to do.
II. The Problem
If you’re a longtime reader of the Bauertology blog and the idea of “BRCT” sounds suspiciously familiar, you’re not imagining it. It was about two years ago that I brought BRCT to the public in this prior post. Originally, BRCT was simply a bracketology supplement that used the concept of “what feels right” to try to best match the imperfect reasoning of the selection committee, in order to give bracketologists an easier time comparing résumés between teams.
But as I tried to keep that version of BRCT up to snuff, a plethora of problems soon arose. Firstly, no computer formula is ever going to match what a dozen human minds are thinking and feeling about dozens and dozens of teams. (I finally learned that lesson when Nevada was handed a 10 seed last year, despite just about every measure in the land saying that the Wolf Pack should have been slotted about three seed lines higher, a.k.a., where I had them seeded.) And as the season went on, the task of trying to balance BRCT’s scoring system between growing, countable statistics (like quadrant wins and losses) and shifting, uncountable rankings (like metrics and strength of schedule) proved too difficult.
So I went back to the drawing board with BRCT, keeping nothing but the name. (I mean, how could I ditch a perfect name like that?) I ultimately decided that trying to create a logical formula that could predict what a group of often-illogical humans are thinking was a fool’s errand.
But instead of trying to play prognosticator, what if we used BRCT to turn this all-important NET ranking system that the NCAA continually shoves in our faces into a résumé formula worth calculating?
Before the advent of WAB (i.e., Wins Above Bubble, originally coined by Seth Burn and popularized by Bart Torvik, a résumé metric that operates on Torvik’s T-Rank system, before the NCAA borrowed the name for their own NET-based metric, which was officially added to the selection committee’s team sheets this past summer), the NCAA did not have a results-based metric that used the NET rankings as its basis. This always struck me as strange. If you’re going to have this new ranking system, and then repeatedly say that a team’s own NET ranking doesn’t matter, but said team’s opponents’ NET numbers sure do, then why not also implement a résumé metric that puts those opponents’ NETs to good use?
I’ve said this before plenty of times, but it bears reiterating: The NET is a pretty darn good sorting system that I will happily defend. While we most often refer to the NET rankings as just that, a “ranking,” it’s really not a ranking at all. The NET is a sorting tool that puts the 364 Division I teams in some kind of order according to their inherent quality. It is in this way that NET is extremely similar to predictive metrics (also called “efficiency” or “quality” metrics) like KenPom or T-Rank. These predictive metrics don’t particularly care about whether you won or lost your games. They dive into the granular basis of how you performed on your individual possessions (scoring on your offensive ones, and preventing your opponent from scoring on your defensive ones), adjusted for your strength of schedule. These numbers are then used to predict how you will perform going forward. And while they’re not always 100% accurate, they tend to get pretty darn close. (There’s a reason we’re still talking about KenPom in everyday college basketball conversation nearly thirty years after its creation).
So NET, like KenPom and other efficiency metrics, measures the inherent quality of your team. Great! But of course, the inherent quality of your team only gets you so far. What matters more is putting that inherent quality to good use by winning some basketball games. This is why the selection committee has emphasized time and again that they care more about your opponent’s NET than your own. Efficiency rankings only say that you’re good at basketball and deserve a tournament spot. Résumé rankings, which factor in the actual outcomes of games, prove it. And that’s why the résumé metrics (also known as “results” or “performance” metrics), like SOR and KPI, tend to have a much higher correlation with selection to the NCAA Tournament than the NET and other predictive measures do.
This emphasis on résumé is also where the quadrant system comes into play. The quadrants are a group of four buckets into which wins and losses are placed, helping the selection committee to visualize which victories and defeats are more meaningful than others.
In case you’re unfamiliar, the quadrants break down like so, where the numbers next to Home/Neutral/Away represent the opponent’s NET ranking:
- Quad 1: Home 1-30, Neutral 1-50, Away 1-75
- Quad 2: Home 31-75, Neutral 51-100, Away 76-135
- Quad 3: Home 76-160, Neutral 101-200, Away 136-240
- Quad 4: Home 161-364, Neutral 201-364, Away 241-364
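For reference, the bands above translate directly into a simple lookup. Here is a hypothetical sketch (the function name and structure are my own, not the NCAA's):

```python
def quadrant(opp_net: int, site: str) -> int:
    """Quadrant (1-4) for a game, given the opponent's NET rank
    and the site: 'H' (home), 'N' (neutral), or 'A' (away)."""
    # Upper NET cutoffs for Quads 1-3 at each site; anything past the last is Quad 4.
    cutoffs = {"H": (30, 75, 160), "N": (50, 100, 200), "A": (75, 135, 240)}
    for quad, upper in enumerate(cutoffs[site], start=1):
        if opp_net <= upper:
            return quad
    return 4

# Note the cliff: quadrant(30, "H") is 1, but quadrant(31, "H") is 2.
```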
But to fully understand the quadrants, it’s also critical to have a basic comprehension of how they work in relation to NET. I’ve got to admit, I’m pretty tired of hearing things like, “Why is so-and-so ranked so low in NET? They have so many Quad 1 wins!” You’re proving to me that you don’t know jack about the selection process if you say something like that! You need to understand: the NET creates the quadrants; the quadrants do not create the NET. The wins and losses that are sorted into the four quadrant buckets are based on the inherent quality (i.e., the NET ranking) of said opponent—there’s your explanation for why the NET is a sorting tool, not a ranking—as well as where the game took place (home, away, or neutral court).
And, hey, this quadrant system that depends on the NET is far better than the RPI system of old, which merely relied on pure winning percentage of your opponents and your opponents’ opponents with no consideration at all for inherent quality, which is so critical for actually determining what should be considered a “good win” or a “bad loss.”
But it’s also extremely imperfect. The cutoffs between quads are incredibly arbitrary and are simply inserted as nice, round numbers to make it all look clean. Why should a home win over NET #30 (Quad 1) count that much more than a home win over NET #31 (Quad 2)? It may not seem like such a big deal at first, but when you get into the nitty gritty of bracketology and you start comparing résumés where the margin of difference is often razor-thin, the separation between a Quad 1 and Quad 2 win can hold a lot more value than initially thought. Similarly, a road win over the NET #1 team in the land and a road win over the NET #75 team being valued the same is just silly.
(The top two quads do actually break down further into Quad 1A, Quad 1B, Quad 2A, and Quad 2B to help defray this situation a little bit, but it’s really just inserting even more arbitrary cutoffs into a system in which there are already too many.)
So the quadrants are good—an improvement from what we used before—but they can also be a lot better.
Instead of placing these arbitrary cutoffs between specific NET ranks, what if we put the whole NET system on a scale, and gave wins and losses a score—let’s say somewhere between 50 and -50—for every individual game?
And what if we could then add up those individual game scores in totality, alongside other important factors like game location and margin of victory or defeat, to come up with a single NET-based number that determines how deserving a team is of earning an at-large bid?
That’s where BRCT gets its time to shine.
III. The Solution, or How BRCT Works
Believe it or not, the idea of a résumé metric that scores things through a scale on a game-by-game basis is not new at all. KPI (Kevin Pauga Index), an official team sheet metric, does this as well, putting individual game performances on a scale from 1 to -1. But whereas KPI feeds off its own rankings and gets complicated with things like pace of play and opponents’ strength of schedule, BRCT keeps it simple; there are only four factors that we’re concerned with: the opponent’s NET, the location of the game (home, neutral, or away), the result (win or loss), and the final score.
To explain how BRCT works, we’ll start with how individual results are scored according to opponent strength. Time to get exponential!
To get a proper idea of how wins and losses should be valued, we need to implement an exponential scale. This is the lifeblood of how BRCT operates. After all, it’s not fair to put all 364 teams on a linear scale—the difference between a win over the NET #1 team and the NET #30 team is far greater than the difference between a win over #300 and #330, and that ought to be reflected in some way.
(What would really be great is if the NCAA released the full NET scores for each team as opposed to just the ranks, so we can see how big the quality gap between team #1 and team #30 actually is. But until that day, this exponential system is the best that we can do.)
So, let’s get to it. Let’s say we want to determine how good a win over the NET #1 team should be viewed in comparison to a win over teams at other NET ranks. Here’s the exponential formula that BRCT uses to measure such a thing:
(45/131769)*(X-364)^2
- X = opponent’s NET
That formula may look complex, but the calculation is not. You simply plug the opponent’s NET rank into the variable where X is, and you get a score for how good that win is! Thus, a win over the #1 team earns you 45 points toward your résumé. A win over the #30 team earns you about 38 points. Beating up on the NET #330 team does you barely any good and earns you less than a single point. And it goes all the way down to NET #364, the very worst DI team in the country, earning you a whopping 0 points for picking on last place.
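In code, that “exponential” formula (technically a quadratic in the opponent’s NET rank) really is a one-liner. A minimal sketch reproducing the values above:

```python
def neutral_win_score(opp_net: int) -> float:
    """Base BRCT value of a neutral-court win over a team at the given
    NET rank: 45 for NET #1, curving down to 0 for NET #364.
    Note: 131769 = 363^2, which is what pins #1 at exactly 45."""
    return (45 / 131769) * (opp_net - 364) ** 2

print(round(neutral_win_score(1), 1))    # 45.0 — the maximum
print(round(neutral_win_score(30), 1))   # 38.1
print(round(neutral_win_score(330), 2))  # well under a point
print(neutral_win_score(364))            # 0.0 — beating last place earns nothing
```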
This is a good start! But we’re not close to finished. An important consideration that the quadrant system makes is game location, as it’s much more difficult to beat a team in their own building than it is to beat them on your own court. So we save that formula from above for neutral-court victories, and now we introduce these small adjustments for home and road contests:
For a home win: (45/131769)*(X-364)^2*(8/9)
For an away win: (45/131769)*(X-364)^2*(10/9)
These subtle adjustments make it so an away victory is more lucrative and a home victory is less so. So, if we plug in that NET #1 opponent, for which a neutral-court victory would ordinarily be worth 45 points, we find that beating said opponent at home is worth 40 points, while besting them on their own floor is worth 50. And that means that 50 points, i.e., a road win over the #1 team, is the upper boundary for this per-game résumé calculation system. (Why 50? We’ll get to that shortly.)
That’s how BRCT works for wins; now we need to figure out losses. And we should operate on the same principle that the gap between a loss to the NET #1 team (completely forgivable) and the #30 team (fairly forgivable) should be much wider than the gap between a loss to the #300 and #330 teams (horrendous in both instances).
And we still employ the same concept for home/neutral/away; we just have to flip the adjustment fractions. You get a slight reduction in penalty for a road defeat and a slight amplification in penalty for taking an L in your own dojo. Here’s what those formulas end up looking like:
For a home loss: ((45/131769)*(X-364)^2-45)*(10/9)
For a neutral loss: (45/131769)*(X-364)^2-45
For an away loss: ((45/131769)*(X-364)^2-45)*(8/9)
It’s a simple translation of the exponential victory graph—just add that -45 to the end of the base formula to get the defeat penalty that we want.
So let’s say you make a big oopsie and lose to the worst team in the country, NET #364, on a neutral court. That’s -45 points for you! Now let’s say you lost to said terrible team on their own floor—slightly more forgivable, but still atrocious. That equals -40 points. And then there’s the coup de grâce of all bad losses: the home defeat to NET #364. That black mark on your team sheet is worth the full -50 points. On the flip side, losing to the #1 overall team is a totally forgivable sin that earns you 0 points—nothing gained, but nothing lost.
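Putting the win and loss formulas together, the whole single-game scorer can be sketched in one function (a minimal illustration of the formulas above, not BRCT’s actual implementation):

```python
BASE = 45 / 131769  # 131769 = 363^2, so a neutral win over NET #1 is worth exactly 45

def game_score(opp_net: int, site: str, won: bool) -> float:
    """BRCT résumé points for one game on the 50 to -50 scale.
    site is 'H' (home), 'N' (neutral), or 'A' (away)."""
    win_value = BASE * (opp_net - 364) ** 2  # 45 for NET #1, down to 0 for NET #364
    if won:
        # Road wins get a 10/9 boost; home wins take an 8/9 haircut.
        return win_value * {"H": 8 / 9, "N": 1, "A": 10 / 9}[site]
    # Losses: shift the curve down by 45, then amplify home losses (10/9)
    # and soften road losses (8/9).
    return (win_value - 45) * {"H": 10 / 9, "N": 1, "A": 8 / 9}[site]
```

A road win over NET #1 comes out to 50, a home loss to NET #364 to -50, and every other result lands somewhere in between.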
I like this positive points system for wins and negative points system for losses, since it runs on the principle that wins can only help you and losses can only hurt you… though the severity of said help or hurt can (and should) vary greatly.
So, across an entire season of play, we just add these individual game scores together to get your full BRCT résumé score, right? Well, we do, but that’s not where the calculations end. As I mentioned before, it irks me a little bit that our traditional results metrics like SOR and KPI don’t provide much context for what happened inside your wins and losses.

This is how you end up with a scenario like last year’s Syracuse team, which had an SOR ranking of 42nd and a KPI of 51st, good enough for an average of 46.5, which is typically considered to be well within range of being in contention for an at-large bid. Yet, there wasn’t a single bracketologist that even had the Orange on the tournament radar. Why? Perhaps it had something to do with Syracuse ranking 84th in NET, 87th in KenPom, and an ungodly 103rd in BPI. By all accounts, Syracuse was not a tournament-level team. The predictive numbers knew that. But the résumé numbers were duped, because Syracuse was able to construct a respectable 20-12 record against a fairly challenging schedule.

The issue here is that Syracuse built said record by squeaking past bad teams (12-point win over Canisius, 4-point win over Colgate, 6-point win over Louisville) while also getting blasted off the floor nearly every time it played someone actually tournament-worthy (36-point loss to North Carolina, 20-point loss to Duke, 19-point loss to Gonzaga). That kind of team should not be rewarded with résumé numbers that point toward possibly deserving a tournament bid. So we must take matters into our own hands!
This is where our 50 to -50 scale from the résumé side of things comes into play. In BRCT, every one of those single-game résumé scores comes with a counterpart for how great or how slim the margin of victory or defeat was in said game. So long as we put a cap on margin of victory at +50 and margin of defeat at -50, then every one of our numbers that we can get for measuring the résumé side (quality of win/loss) and efficiency side (margin of victory/defeat) falls into the same 50 to -50 scale.
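The margin cap itself is nothing more than a clamp. A one-function sketch:

```python
def capped_margin(points_for: int, points_against: int) -> int:
    """Single-game scoring margin, clamped to BRCT's 50 to -50 scale."""
    return max(-50, min(50, points_for - points_against))

# A 60-point blowout and a 50-point one both count as +50,
# so running up the score past 50 buys you nothing extra.
```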
Now we can start to tally things up. Let’s do so with a real-life example; we’ll pick Alabama, which ranks second in BRCT, as of the games played through Jan. 15. We can see that the Crimson Tide have accumulated approximately 370 résumé points through 17 games played (against DI opposition) across their 14-3 record, while also putting up an overall scoring margin of +203 (capped at +50 and -50 per game). Good stuff, but we’re not done yet.
You see, that +203 doesn’t tell the entire story. There’s a big difference between racking up that +203 margin against total scrubs and achieving the same margin against an actually difficult schedule. So we need to implement a sort of strength-of-schedule adjustment to the efficiency side of things. (We don’t need to do this on the results side, since it’s already baked into the formula.)
This is where the difficulty factor comes into play. The difficulty factor is essentially BRCT’s version of strength of schedule, which helps to put teams on a level playing field when it comes to evaluating how impressive their efficiency (i.e., overall scoring margin) really is.
The formula for difficulty factor establishes its middle ground at 1, with the most difficult possible schedule being one that always plays the NET #1 team on the road (for a perfect difficulty factor of 2), and the easiest possible schedule being one that always plays the NET #364 team at home (for a perfect(?) difficulty factor of 0).
Again, formulas get a little complex here, but this is what it looks like:
(X+Y)/(Z*50)*2
- X = the total résumé score earned from a team’s wins
- Y = the total résumé score that would have been earned from winning in a team’s losses
- Z = the number of games the team has played
Then, once the difficulty factor is determined, we simply multiply it by the overall raw scoring margin. (Or divide by it, if the team’s scoring margin is below zero.)
So let’s get back to our Alabama example and find the Tide’s difficulty factor, which, as of Jan. 15, looks like this:
(383+123)/(17*50)*2
- X = 383, Alabama’s total résumé score accrued from just its 14 wins
- Y = 123, the total résumé score points Alabama would have earned had it come out victorious in its three losses to Purdue, Oregon, and Ole Miss
- Z = 17, the number of games Alabama has played
And it all calculates out to a difficulty factor of 1.19.
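As a quick check, Alabama’s difficulty-factor arithmetic can be reproduced in a few lines (a sketch using the figures above):

```python
def difficulty_factor(win_resume_pts: float, loss_possible_pts: float,
                      games: int) -> float:
    """BRCT difficulty factor: 2 = every game a road date with NET #1,
    0 = every game a home date with NET #364, 1 = the midpoint."""
    return (win_resume_pts + loss_possible_pts) / (games * 50) * 2

# Alabama through Jan. 15: X = 383, Y = 123, Z = 17
print(round(difficulty_factor(383, 123, 17), 2))  # 1.19
```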
So, with that difficulty factor of 1.19—more difficult than the middle point of 1—Alabama’s raw scoring margin of +203 actually increases to an adjusted efficiency score of 241! It pays off to play a tricky schedule.
Now we can finally get to our last set of calculations. We add the total résumé score and the adjusted efficiency score together… but not before multiplying the résumé score by 0.6 and multiplying the efficiency score by 0.2. Why? Well, adding the efficiency portion to BRCT’s calculation is critical to separate it from other résumé metrics like SOR and KPI that don’t give as much weight to such a factor… but at the end of the day, the actual wins and losses do matter more for your tournament case than what happened inside those wins and losses, as we’ve established with the committee’s preference for results metrics over predictive metrics when it comes to selection. So we make this adjustment to accurately reflect that sentiment, so as to not undervalue the results and overvalue the efficiency. In testing, I’ve found that a 3-to-1 ratio between résumé and efficiency produced the best results across the board, so that’s what we’re rolling with.
(And perhaps you’re wondering how I settled on 0.6 and 0.2 as the multipliers, as opposed to other 3-to-1 ratios like 0.75 and 0.25, or 1.5 and 0.5, et cetera, et cetera. Simple! If we use 0.6 and 0.2 as our multipliers, then the maximum BRCT score you can possibly achieve is 500. In this scenario, a team would have its highest possible résumé score, i.e., beating the NET #1 team on the road every game, added alongside its highest possible efficiency score, i.e., beating the NET #1 team on the road by 50 or more points every game, which produces a perfect difficulty factor of 2. I figure this makes more sense for a metric that puts both its résumé and efficiency single-game calculations on a 50 to -50 scale, than using a multiplier like 0.75/0.25, in which the maximum score would be 625. It’s just all for the purpose of making more logical sense.)
Anyway, back on track—after we’ve added these newly-multiplied résumé and efficiency scores together, we now divide this number by the number of games played, in order to put all teams on a level playing field. After all, it simply wouldn’t be fair to those Ivy League teams that only play 25-ish games to have a naturally lower BRCT score than the SEC teams that end up playing 33-ish games, just because they didn’t have as many opportunities out on the court.
Lastly, we multiply it all by 10 to get a nice, big satisfying number. Here’s what the final formula looks like written out:
(X*0.6+Y*0.2)/Z*10
- X = team’s résumé score
- Y = team’s adjusted efficiency score
- Z = number of games played
So let’s plug and play with Alabama one last time. 370 goes into X, 241 goes into Y, and 17 goes into Z for a grand total of (drumroll please)… 159.0! And that is how BRCT works.
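The whole final tally can be chained together end to end. A sketch using Alabama’s unrounded intermediates (the 241 quoted above is a rounded figure; carrying full precision through reproduces the 159.0):

```python
def brct_score(resume_pts: float, raw_margin: float,
               diff_factor: float, games: int) -> float:
    """Final BRCT: résumé weighted 3-to-1 over adjusted efficiency,
    normalized per game, then scaled by 10."""
    # A factor above 1 inflates a positive margin; for a negative margin
    # we divide instead, so a tough schedule softens the damage.
    adj_eff = raw_margin * diff_factor if raw_margin >= 0 else raw_margin / diff_factor
    return (resume_pts * 0.6 + adj_eff * 0.2) / games * 10

# Alabama through Jan. 15: 370 résumé points, +203 capped margin,
# difficulty factor (383 + 123) / (17 * 50) * 2, across 17 games.
alabama = brct_score(370, 203, (383 + 123) / (17 * 50) * 2, 17)
print(round(alabama, 1))  # 159.0
```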
It’s important to note that this BRCT score is fluid: it will change not only with a team’s own wins and losses, but may even shift on days when the team in question isn’t playing. Alabama’s Jan. 15 BRCT score of 159.0 might go up or down a couple ticks before its next game on Jan. 18 as a result of the NET ranking’s daily updates changing the inherent quality of its prior opponents. The Tide’s 95-90 win over Rutgers on Nov. 27 would surely gain some weight if the Scarlet Knights pull off the upset of Nebraska in Omaha on Thursday, shooting themselves up the NET rankings and thus making Alabama’s win over Rutgers more gaudy. Or maybe Rutgers will get nuked by the Cornhuskers to the tune of a 50-point drubbing, sending the Knights tumbling down the NET and lessening the impact of Alabama’s November victory.
IV. The BRCT Chart
You’ve officially made it through BRCT’s marathon explanation! Now let’s take a look at the full BRCT breakdown for all 364 DI teams, through all 2,861 DI vs. DI games played up through Jan. 15:
As mentioned prior, the main purpose of BRCT is to determine how all 364 DI teams stack up, based on a combination of both résumé and efficiency, in terms of deservingness for an at-large bid to the NCAA Tournament. And the way our formulas shake out ends up creating some pretty nice visual segmentations of tournament-level deservingness. Teams with a BRCT score of 150+ are likely in contention for a 1 seed. Teams with a score of 100+ should be well within the conversation for a protected seed. Teams somewhere in the 75-100 range are most likely looking at a single-digit seed. And teams in the 50-75 range represent the heart of the bubble. (These cutoffs, of course, are only based on one year of testing with last season’s final data, but they’re worth keeping an eye on going forward.)
You’ll also notice the impact of BRCT’s résumé/efficiency balance coming into effect. Take Houston, for example, currently ranking 16th in BRCT. The Cougars place higher in BRCT than in any official results-based metric on the team sheet, sitting at 25th in KPI, 26th in SOR, and 29th in WAB as of this moment. Though all metrics accurately reflect Houston’s lack of any really big, meaty wins (their best, per BRCT’s single-game résumé score, is their hot-off-the-press Jan. 15 win over West Virginia, worth 33 points), BRCT is much more favorable to the Coogs because they’ve so often clobbered their opponents; all five of their Big 12 wins to date have been decided by 13 or more points. Their adjusted efficiency score is 314, fourth best in the nation, so they receive a fairly significant bump in BRCT compared to the other results metrics that don’t make such an adjustment. And where do most bracketologists happen to have Houston at the moment? A 4 or 5 seed, hovering right around #16 overall. Point, BRCT.
V. Shortcomings and Strengths
So, is BRCT the perfect metric? Of course not. BRCT is still, after all, a formula, and does not consider elements that human evaluators can, like injuries to key players impacting outcomes, or a game’s location being officially listed as a “neutral site,” even though said site is actually much closer to one team’s area of interest than their opponent’s. And just relying on the pure margin of victory and defeat as gleaned from a game’s final score can often be deceiving; sometimes teams are able to shape an abominable performance in which they trailed by 30 nearly the entire contest into a mere 10- or 12-point loss by the time the final horn sounds, thanks to the walk-ons coming in during the closing minutes. There’s just no easy way to account for these things in a computer formula at the moment.
But BRCT does much more good than harm. It does away with the arbitrary cutoffs of the quadrants and puts the quality of results on a logical, exponential scale. It factors in key efficiency components that other résumé metrics fail to consider. It puts the NET system that the NCAA emphasizes into an actual, practical use. And, most important of all, it has a cool name. (That’s super important, right?)
Finally, BRCT brings power back to the phrase “every game matters.” It pushes aside the selection committee’s hyper-fixation on a team’s non-conference strength of schedule by placing a numeric value onto every single game played, while also still rewarding teams for scheduling challenging games. It properly evaluates the difference between a win over the NET #1 team and the NET #30 team that the quadrant system fails to adjust for.
If we are truly to believe that “every game matters” like we’re so often told, then we need a metric like BRCT that actually—and accurately—considers the full picture.
VI. Behind the Scenes/The Next Step
NOTE: The following section is outdated as of Jan. 28, 2025. Please see this post, BRCT: An Exciting Update, for the most up-to-date information.
Congrats, you made it to the end! Your reward is a behind-the-scenes glance at how the BRCT chart, which is updated daily and is also available to view at the Bauertology website’s BRCT tab, comes together.
The easy part of maintaining BRCT is making sure that NET numbers and records are up to date. All that’s required is copying-and-pasting from the NCAA’s official NET site once numbers are updated for the day. Those numbers go into this spreadsheet, which then serves as the basis for a number of XLOOKUP functions in the main 1-364 BRCT chart:
The more challenging part is the individual game data. Here’s the spreadsheet where I enter the results of every single Division I game played to date, from which win-loss results, game locations, and scoring margins are scraped into BRCT’s key formulas:
(In addition to the ‘W-Res’ (winner’s résumé points) and ‘L-Res’ (loser’s résumé points) columns, you’ll see the ‘L-PPs’ (loser’s possible points) column, which is critical for determining the difficulty factor; this is how we figure out how many total résumé points a team would have earned had it won in its defeats.)
As of right now, there is no easy way for me to automatically pull game data from the NCAA website into this sheet. So, every single day, I manually update it, entering every single result into each row with the winning team, the winning team’s score, the losing team, the losing team’s score, and the site (H for home, N for neutral, and A for away, with the determination for home or away being made in regards to the winning team).
Thankfully, the formulas do the rest, though it’s still a decent amount of work. And with it all being manual data entry from yours truly, it may be prone to human error, such as accidentally typing in the wrong team, score, or location. For example, I first encountered this issue back in November when I noticed that Auburn’s BRCT score was oddly low just days after the Tigers earned their marquee win of the season via neutral-court victory over Houston… I soon realized that I accidentally had that win as being over Houston Christian instead. Humongous difference! So, yeah, it’s entirely possible that there are some typos in that chart somewhere skewing the data. Feel free to comb through the thing yourself, and please let me know if you spot any errors.
I’m hoping to eventually automate this process to avoid the possibility of human fault, while still maintaining the same clean and organized look, so it’s easy to go back through and check the results of individual games. If you have any ideas on how to do this, I’m all ears! You can reach out by emailing me at djbauer1999@gmail.com, or sending me a DM on Twitter @Bauertology.
I’m also hoping to eventually create a system where Bauertology site visitors can interact with the BRCT rankings and organize them according to conference, or individual factors like résumé score, adjusted efficiency score, and difficulty factor, so that it’s not just the same 1-364 BRCT ranking every time with no further interface functionality available.
BRCT is still very much in its infancy, and there’s no guarantee that I won’t make some sort of tweak to the formula in the future. But for now, I’m proud of what I’ve created, and I hope that you’ll share this post and get more people excited about the future of BRCT, too. Thanks for reading!