What is the main problem with analyzing the ruleset before the planner runs, to determine which constraints can reasonably be disregarded during planning? Does it make programming the ruleset too hard up front? The number of available resources has to be defined anyway, right? So the only extra work is providing an estimate of each resource's usage or availability. If there are 10 taxis in the city, you can form an impression that you won't be able to find one, because more than 10 people will want a taxi within your travel window. In a more cognitively plausible sense, we simply have an impression from past experience of whether taxis will be available.
If it is unreasonable to expect someone to provide this availability information directly, there is still easy work that could be done: some cases can be taken care of up front. In the blocks-world example, if the world has four arms and you only have three blocks, it's evident that the number of arms is not a restriction, so this responsibility can be passed on to the scheduler.
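A minimal sketch of that preprocessing check, assuming resources and worst-case simultaneous demands are given as plain dictionaries (the names `capacities` and `max_demands` are hypothetical, not from any standard planner API):

```python
def redundant_resources(capacities, max_demands):
    """Return resources whose capacity meets or exceeds the worst-case
    simultaneous demand, so the planner can safely ignore them."""
    return {name for name, cap in capacities.items()
            if cap >= max_demands.get(name, 0)}

# Blocks-world case from above: four arms, three blocks. At most one arm
# is needed per held block, so demand can never exceed three.
capacities = {"arm": 4}
max_demands = {"arm": 3}
print(redundant_resources(capacities, max_demands))  # → {'arm'}
```

The point is only that the bound comes from static counts (number of blocks), so no planning is needed to establish it.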
In a more realistic project-planning example, you could build a Graphplan graph a few levels deep, count the number of actions performed in parallel at each level, and compare that to resource availability to determine which constraints are unnecessary. Using this information, you could simplify the domain on the planner side and then proceed with whatever method you wish from there.
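The level-counting idea might look like this. The representation here (a list of per-level action sets, each action tagged with the resources it uses) is an assumption for illustration, not Graphplan's actual data structure:

```python
def unneeded_constraints(levels, availability):
    """levels: list of sets of (action_name, frozenset_of_resources),
    one set per planning-graph level.
    availability: resource name -> units available.
    Returns resources whose peak per-level usage stays within availability."""
    peak = {r: 0 for r in availability}
    for actions in levels:
        usage = {r: 0 for r in availability}
        for _name, resources in actions:
            for r in resources:
                if r in usage:
                    usage[r] += 1
        for r in usage:
            peak[r] = max(peak[r], usage[r])
    return {r for r, avail in availability.items() if peak[r] <= avail}

# Toy example: at most 2 truck-using actions ever appear in parallel in
# the levels expanded so far, and 5 trucks are available, so the truck
# constraint looks safe to relax.
levels = [
    {("load-a", frozenset({"truck"})), ("load-b", frozenset({"truck"}))},
    {("drive-a", frozenset({"truck"}))},
    {("unload-a", frozenset({"truck"})), ("unload-b", frozenset({"truck"}))},
]
print(unneeded_constraints(levels, {"truck": 5}))  # → {'truck'}
```

Note that this is only a heuristic: levels beyond the expanded depth could still demand more parallelism, so a conclusion drawn from a few levels is an estimate, not a guarantee.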
So why is this concept still at the forefront of research? There seem to be obvious places to make initial progress; I would have thought it would be standard in implementations by now.