Don't tally all the resistance for the rail at once; work node-by-node.
I notice in tweeds the rail series resistance is around 20k, whereas it is lower in many blackface designs (maybe 5.7k)... does this difference significantly impact how the amp responds? I figure more R would give better stage decoupling, but what's the trade-off?
The trade-off is d.c. voltage drop: the higher the current drawn through the rail, whether from a stage's idle current or its signal current, the greater the voltage dropped across the series resistance.
Start at the filter cap furthest from the rectifier (usually the cap feeding the input gain stage; call it "Cap 1"). Add up the currents of each individual stage attached to that cap. This total current is drawn through the decoupling resistor feeding this last cap from the cap next-closer to the rectifier (call this "Cap 2"). Because the idle draw of most gain stages is under 1mA, and the dynamic currents are also small, this resistor can be quite large.
Now look at the filter cap next-furthest from the rectifier ("Cap 2"). A decoupling resistor feeds it from a higher-voltage B+ point, and that resistor carries the current for all the stages connected to Cap 2 plus all the current for Cap 1's stages. This resistor is generally smaller than the one feeding Cap 1, because the greater total current through it would otherwise mean a greater d.c. voltage drop.
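The node-by-node tally above can be sketched as a quick calculation. All currents and resistor values here are made-up illustrations, not measurements from any particular amp:

```python
# Node-by-node d.c. drop along a B+ rail (hypothetical values).
# Stage currents (amps) at each node, listed from the node furthest
# from the rectifier (Cap 1) back toward it (Cap 2, ...).
stage_currents = [0.0008, 0.0012]        # 0.8 mA on Cap 1, 1.2 mA on Cap 2
decoupling_resistors = [20000, 10000]    # resistor feeding each node, ohms

# Each resistor carries its own node's current plus everything downstream,
# so the cumulative current (and the drop per ohm) grows toward the rectifier.
cumulative = 0.0
for node, (i_stage, r) in enumerate(zip(stage_currents, decoupling_resistors), 1):
    cumulative += i_stage
    drop = cumulative * r
    print(f"Resistor feeding Cap {node}: {cumulative * 1000:.1f} mA, {drop:.1f} V drop")
```

Notice that even a 20k resistor only drops about 16V when it feeds a single sub-milliamp preamp stage, which is why the furthest-out resistor can be so large.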
So the resistors are sized not only to provide isolation between filter caps (no resistance, or too little, leads to motorboating or oscillation through the power supply), and not only to increase filter effectiveness, but also to trim the available d.c. supply voltage as needed. Generally, the input stages work fine with a B+ much lower than the output tube plates, but they also need a cleaner d.c. supply, since their output will be amplified by the rest of the amp.
Up to this point, I've been talking about preamp tubes, which run in class A at pretty small currents; their average current with signal applied doesn't deviate far from their idle draw. Output tubes are different. With applied signal, the screen current for a pair of output tubes might rise from less than 10mA total at idle to 10-20mA or more at maximum power output (the exact increase in average current depends on the tube type and mode of operation).
The extra screen current during maximum power output creates a voltage drop across any large resistor between the plate and screen filter caps. This reduces the screen voltage (and the preamp voltages, to an extent), and if screen voltage drops, plate current drops with it. So a very large resistance between the output tube screens and the PT/rectifier can reduce the attainable maximum power output of the power section. For this to be significant, both the average screen current under max-output conditions and the series resistance need to be "large". Exactly how large (and how significant the impact is) is a case-by-case determination, which also depends on how close you need to get to the theoretical maximum output power of your power amp.
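To put rough numbers on that sag, here's the screen-node voltage for a resistor-fed screen supply at idle versus full output. Every value here is hypothetical, chosen only to show the shape of the effect:

```python
# Screen-node voltage for a resistor-fed screen supply (hypothetical values).
b_plus = 420.0          # volts at the plate/reservoir node
r_screen_feed = 1000.0  # ohms between the plate node and the screen filter cap

i_screen_idle = 0.008   # 8 mA total for the pair at idle
i_screen_max = 0.020    # 20 mA average at maximum power output

v_idle = b_plus - i_screen_idle * r_screen_feed
v_max = b_plus - i_screen_max * r_screen_feed
print(f"Screen node: {v_idle:.0f} V at idle, {v_max:.0f} V at full output "
      f"({v_idle - v_max:.0f} V of sag)")
```

With these made-up numbers the screens lose an extra 12V exactly when the stage is asked for maximum power, which is the mechanism behind the lost output described above.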
Screen current variation tends to be larger when supply voltages are higher and when the tube is run deep into class AB (compared to the same tube at a lower supply voltage operating in class A). That means a greater potential loss of output power due to power supply resistance (mostly the resistance upstream of the screens). It's one reason you tend to see the larger blackface Fenders use a screen choke instead of a resistor, compared to the smaller blackface or tweed amps.
- The choke gives better filtering
- The choke keeps the screen voltage higher under max signal conditions, which allows higher peak plate current and higher maximum power output
- The voltage behavior in point 2 sounds less "saggy" or constricted than a resistor feeding the screens of a high-current output stage
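As a rough illustration of point 2 (values hypothetical again), compare the d.c. drop across a 1k screen resistor with that across a choke whose winding resistance is around 150 ohms, at the same full-output screen current:

```python
# Same average screen current through a 1k resistor vs. a choke with
# ~150 ohms of winding resistance (all values hypothetical).
i_screen_max = 0.020  # 20 mA average screen current at full output
drops = {}
for name, r_series in [("1k resistor", 1000.0), ("choke, ~150 ohm DCR", 150.0)]:
    drops[name] = i_screen_max * r_series
    print(f"{name}: {drops[name]:.0f} V dropped at full output")
```

For d.c., the choke only "costs" its winding resistance, so the screens stay much closer to the plate supply under load, while its inductance still provides the filtering.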