Transformation matrix encoding
At the moment the transformation matrices work only for a particular state vector and a particular ordering. These classes should be more extensible, acting more like a sparse matrix so that we can easily apply transformations to a wider range of systems. First, the variable ordering should be stored and inferred from tag lists (e.g., one list for the conserved variables, one for the primitive variables). Second, right multiplication and dense-matrix construction should depend on this ordering.
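A minimal sketch of what a tag-aware transform could look like, assuming NumPy; the class name `TaggedTransform`, the `(row_tag, col_tag) -> entry` block dictionary, and the method names are all hypothetical, not the existing API:

```python
import numpy as np

class TaggedTransform:
    """Sparse transform whose layout is inferred from variable tag lists."""

    def __init__(self, row_tags, col_tags, blocks):
        self.row_tags = list(row_tags)   # e.g. conserved-variable tags
        self.col_tags = list(col_tags)   # e.g. primitive-variable tags
        self.blocks = dict(blocks)       # only nonzero entries are stored

    def as_dense(self):
        # Dense construction follows the order given by the tag lists.
        a = np.zeros((len(self.row_tags), len(self.col_tags)))
        for (r, c), v in self.blocks.items():
            a[self.row_tags.index(r), self.col_tags.index(c)] = v
        return a

    def rmul(self, vec, vec_tags):
        # Right multiplication permutes the incoming vector into the
        # column-tag order first, so callers need not match our layout.
        perm = [vec_tags.index(t) for t in self.col_tags]
        return self.as_dense() @ np.asarray(vec)[perm]
```

For instance, a caller could pass a state vector in a different order than the transform stores it:

```python
t = TaggedTransform(["rho", "rhoe"], ["rho", "T"],
                    {("rho", "rho"): 1.0, ("rhoe", "T"): 2.5})
t.rmul([300.0, 1.2], vec_tags=["T", "rho"])
```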
Third, we need to think about where these transformations are used and how much extensibility we need or want. A good example of a current conflict is 0-D simulations (conserved vars: rho, rhoe, rhoY) versus simulations with flow (conserved vars: rho, rhoe, rhoY, rhov), where the addition of momentum variables changes the transformation matrix. We could simply write two classes: one that builds the matrix for the momentum-free system, and another that embeds that smaller momentum-free transform in a larger matrix and adds the momentum terms. I think this is the best approach, because we won't be writing very many transformation matrices. An 'additive' approach, where the sparse transform is composed piece by piece, may be feasible and would be much more extensible, but it is far harder to implement.
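The two-class (embedding) approach above could be sketched as follows; the entries of `base_transform` and the assumption that momentum variables are appended last and decouple from the rest are illustrative, not taken from the actual code:

```python
import numpy as np

def base_transform():
    # Hypothetical momentum-free transform for (rho, rhoe, rhoY);
    # the numeric entries are placeholders.
    return np.array([[1.0, 0.0, 0.0],
                     [0.5, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

def with_momentum(base, n_mom):
    # Reuse the smaller momentum-free transform for the non-momentum
    # block and put an identity on the momentum rows/columns
    # (assumes momentum components are appended last and do not
    # couple to the other conserved variables).
    n = base.shape[0]
    full = np.eye(n + n_mom)
    full[:n, :n] = base
    return full
```

This keeps the flow case a thin wrapper over the 0-D case, so only one set of matrix entries has to be maintained.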