Value Add 3.0 Introduction – December 30, 2015
Those wanting an easier explanation of the breakthrough Value Add Basketball 3.0 should click here. This post is a more detailed explanation of the new system and how it works. This Value Add Sports post gives background on the 2011 invention of Value Add Basketball 1.0 and many of the articles written about the system.
The actual ratings of current players appear at www.valueaddbasketball.com, while team ratings are here and ratings for players going back to the 2002-03 season are here.
The programmers were about to shoot me before 2015 ended, but luckily more than 1,000 lines of code have finally been tested and calibrated to produce the first true run of Value Add 3.0.
VALUE ADD 3.0. The third version of Value Add Basketball calibrates each player’s stats to measure how much a player improves his team’s Offensive and Defensive Efficiency. A team of “replacement” players would be expected to score about 80 points per 100 trips down the court against an average defense. The 9.99 Offensive Add for LSU’s Ben Simmons indicates that even if all his teammates were replacement-level players, he would improve the team’s Offensive Efficiency Rating from 80 to 90. His Defensive rating indicates that he would lower an average opponent from 102 (this year’s average) to 98.
The most lopsided team match-up would be University of Virginia’s players against Delaware State’s. To the starting point of 80, you add the total of Virginia’s players’ Offensive Ratings (36.22) and the total of Delaware State’s players’ Defensive Ratings (11.22) to estimate that Virginia’s Offensive Efficiency for that game would be about 127 per 100 trips. So if both teams had 70 possessions during the game, we would roughly estimate that Virginia should score 89 points.
For Delaware State, add the 80 replacement points to Delaware State’s 4.34 in Offensive Ratings and Virginia’s NEGATIVE 10.22 in Defensive Ratings, and Delaware State would be estimated to score 74 points per 100 trips, or 52 points over 70 possessions. So Virginia would be expected to win 89-52 on a neutral court. You add 2 points to the home team’s score and subtract 2 from the road team’s, so if Virginia were at home the system calculates a 91-50 Virginia win, and at Delaware State an 87-54 win.
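For readers who like to follow the math, here is a minimal sketch of that projection in Python – the 80-point baseline, the 2-point home court swing and the Virginia/Delaware State totals are the numbers quoted above, while the function and variable names are just for illustration and are not part of any official Value Add code.

    REPLACEMENT_BASELINE = 80.0   # points per 100 trips for a team of replacement players
    HOME_COURT_POINTS = 2.0       # added to the home team's score, subtracted from the visitor's

    def projected_score(off_add_total, opp_def_total, possessions):
        # baseline + the team's offensive total + the opponent's defensive total,
        # scaled from 100 trips down to the game's actual pace
        efficiency = REPLACEMENT_BASELINE + off_add_total + opp_def_total
        return efficiency * possessions / 100.0

    # Virginia vs. Delaware State on a neutral court, roughly 70 possessions
    uva = projected_score(36.22, 11.22, 70)    # about 89 points
    dsu = projected_score(4.34, -10.22, 70)    # about 52 points
    print(round(uva), round(dsu))              # 89 52

    # Same game with Virginia at home
    print(round(uva + HOME_COURT_POINTS), round(dsu - HOME_COURT_POINTS))   # 91 50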
While Sagarin or Pomeroy will yield slightly more accurate predicted scores – since the sum of the players is not always equal to the overall value of any team – the advantage of this newly calibrated version is that it lets you know the impact of an injured player or a player returning from injury.
NO SIMMONS COULD HAVE TURNED 11-POINT WIN TO LOSS. As shown above, if Simmons were injured we would expect LSU’s Offensive Efficiency to drop from the current 105.9 points per 100 trips to 95.9 per 100 after taking out Simmons’ 9.99. We would expect LSU’s defense to give up 103.7 points per 100 trips instead of their current 99.6. LSU took 84 trips down the court in the 119-108 win over North Florida. Simmons’ combined impact of 14 points per 100 offensive and defensive trips indicates he is worth about 12 points to LSU in a game like that with 84 trips down the court – and LSU likely sees the 11-point win turn into a 1-point loss. (In fact, that night he was likely worth a little more since he scored 43 points – but most of the points a player scores would have been made up by other players.)
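Sketched out with the numbers from that paragraph (the variable names are illustrative only):

    # Rough sketch of the "no Simmons" swing described above
    off_value = 105.9 - 95.9      # ~10.0 points per 100 trips on offense
    def_value = 103.7 - 99.6      # ~4.1 points per 100 trips on defense
    trips = 84                    # possessions in the North Florida game
    swing = (off_value + def_value) * trips / 100.0
    print(round(swing))                 # about 12 points in that game
    print((119 - 108) - round(swing))   # the 11-point win becomes roughly a 1-point loss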
PLAYERS GET 0.0 VALUE ADD WHEN OUT. By the same token, UNC became a much better team when Marcus Paige returned after missing the first six games. If you look at his offensive rating right now, it is 4.70 averaged across all of UNC’s games. However, it is important to look under the “Year” column to see that he has played in only 7 of the team’s 13 games. Since he had a 0.0 Value Add in the six games UNC played while he was still injured, quick math tells us he has actually been worth an average of 8.73 offensive rating points in the seven games he played. At the risk of making Northern Iowa fans angry, that rating over the 66 trips up the court against Northern Iowa adds an estimated 5.76 points to UNC’s score that night (8.73 per 100 trips times the 66 trips that game) and gives UNC a 73-71 win instead of the actual 67-71 upset by Northern Iowa.
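Here is that quick math as a short sketch, assuming the season rating is simply spread over all 13 team games (the helper name is made up for illustration):

    def per_game_played(season_rating, team_games, games_played):
        # A rating is averaged over every team game (counting 0.0 when a player sits),
        # so his value in the games he actually plays is higher.
        return season_rating * team_games / games_played

    paige = per_game_played(4.70, 13, 7)    # ~8.73 offensive points per 100 trips when he plays
    swing = paige * 66 / 100.0              # ~5.76 points in the 66-possession Northern Iowa game
    print(round(paige, 2), round(swing, 2)) # 8.73 5.76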
Several adjustments were part of the new system – some in response to specific criticisms that followed the mainly warm reception the first system received from NBA teams and sports media, and others due to ongoing research that identified inaccuracies.
STEALS WORTH 10.5% LESS. The initial Value Add equations assumed a player who caused steals also created additional turnovers by jumping into passing lanes. While steals are still a huge factor and underestimated by most fans, analysis over the years indicated we overestimated this impact: the number of “non-steal turnovers” a team forces does not appear to be affected by the turnovers that ARE the result of steals (e.g. a team that plays sound defense without going for steals likely forces as many travels, shot clock violations, etc. as a team jumping into the lanes). As a result, Value Add 3.0 reduces the impact of a steal to 89.5% of its value under the initial system. In effect, all five players on the court at any given moment share most of the credit for each non-steal turnover rather than most of the credit going to players with more steals.
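In code the change is a single factor – and the even five-way split of non-steal turnover credit shown below is just one way such sharing could look, not the exact published formula:

    STEAL_FACTOR_V3 = 0.895   # a steal is now credited at 89.5% of its old Value Add weight

    def steal_value_v3(old_steal_value):
        # whatever a steal was worth under the initial system, scale it down by 10.5%
        return old_steal_value * STEAL_FACTOR_V3

    def nonsteal_turnover_credit(team_nonsteal_turnovers_forced):
        # an assumption for illustration: split the credit for forced non-steal turnovers
        # evenly among the five defenders on the floor rather than tilting it toward
        # the high-steal players
        return team_nonsteal_turnovers_forced / 5.0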
DEFENSIVE STOPS. We were able to measure the “other” defensive stops more accurately after removing the impact of defensive rebounds, blocked shots and steals, and in quality control the overall defensive Value Add figures are now much closer to each team’s defensive ratings.
DEFENSIVE MINUTES ALLOCATED (DOUG MCDERMOTT RULE). In allocating defensive stops, a factor is now built in which assumes the coach of a weaker defensive team is typically playing his better defenders more, all other things being equal. Under the old formula, Doug McDermott is the only player to have an OFFENSIVE RATING of over 7.0 in three different seasons (JJ Redick is the only other player to do it even twice). However, because McDermott played at least 79% of the minutes every year and the Creighton defenses were poor, the original system assumed he was giving up a big percentage of the buckets scored. Version 3 lessens the defensive penalty for playing more minutes on a bad defensive team and assigns the bigger penalty to players who get in the game less often on that team.
GAMES PLAYED. Denzel Valentine’s 12-game Value Add is 13.49, but his value in the 13th game against Oakland – the first game missed with his injury – is 0.00, so his overall average falls to 12.45, dropping him from 4th to 11th place. The reverse is true in the Marcus Paige example above – a returning player is worth more per game once back from injury.
POSITION ADJUSTMENT. Version 3.0 sets a value so that the average D1 guard will always equal a 2.0 Value Add and the average front-line player will also equal 2.0. When we update past years based on the new system, the raw scores of guards will get a large boost because hand checking made it so much harder for guards. However, this season the freedom of movement rules have helped guards put up much better stats, so they need no adjustment (SG*1), whereas front-line players actually needed to be increased by 1.5% (PF*1.015).
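A hedged sketch of that calibration – deriving each position factor as 2.0 divided by the position group’s current average is my reading of the description above, not a published formula:

    TARGET_AVERAGE = 2.0   # the average D1 guard and the average front-line player both land at 2.0

    def position_factor(position_group_average):
        # factor that moves a position group's current D1 average to the 2.0 target
        return TARGET_AVERAGE / position_group_average

    def adjusted_value(raw_value, factor):
        return raw_value * factor

    # This season's quoted factors: guards unchanged (SG*1), front line up 1.5% (PF*1.015)
    print(adjusted_value(5.00, 1.0))     # a hypothetical guard raw score, unchanged
    print(adjusted_value(5.00, 1.015))   # a hypothetical front-line raw score -> 5.075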
HIGH-, MID- AND LOW-MAJORS. High Major players do not play the full game in early season matchups against Low Major teams – so once conference play starts up, the high major players gain about 2.0. To avoid the early season distortion, all high major players receive an extra point (High+1) while Low Major players lose a point (Low-1) and Mid-Major players stay the same. This adjustment is phased out over the next two months.
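As a sketch, assuming the phase-out is linear over roughly 60 days (the post only says the adjustment fades out over the next two months):

    LEVEL_ADJUSTMENT = {"high": 1.0, "mid": 0.0, "low": -1.0}   # High+1, Mid+0, Low-1

    def early_season_adjustment(level, days_into_phaseout, phaseout_days=60):
        # a linear fade to zero is an assumption; the post just says the adjustment
        # is phased out over the next two months
        remaining = max(0.0, 1.0 - days_into_phaseout / phaseout_days)
        return LEVEL_ADJUSTMENT[level] * remaining

    print(early_season_adjustment("high", 0))    # 1.0 at the start
    print(early_season_adjustment("high", 30))   # 0.5 halfway through the phase-out
    print(early_season_adjustment("low", 15))    # -0.75 a quarter of the way in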
TEAM VALUE ADD. Value Add is primarily a rating of individual players, but Version 3.0 yields individual Value Adds that tally more closely to the Efficiency Ratings of entire teams. If you go through the top Value Add teams, all are within a few spots of where they rank at www.kenpom.com until you get to Princeton – who for some reason is 40th based on their Value Add players but not even in the top 100 at www.kenpom.com. Harvard is the next team (88th in Value Add) that is much lower at www.kenpom.com, so maybe those smart Ivy Leaguers can figure out why Value Add seems to like their players.