We explore and illustrate the use of ranked sparsity via the LASSO, in combination with a set of penalty-ranking functions, to impose a structured sparse model selection framework that favors models that are better specified and more transparent. Model selection methods for generalized linear models (GLMs) commonly presume that each candidate parameter is equally worthy of entering the final model; we call this presumption "covariate equipoise". However, this assumption does not always hold, especially in the presence of derived variables. For instance, when all possible interactions are considered as candidate predictors, the presumption of covariate equipoise often produces misspecified and opaque models. We suggest a modeling strategy that requires a stronger level of evidence before certain variables (e.g., interactions) are selected into the final model. This ranked sparsity paradigm can be implemented with the LASSO and has broad applicability; we explore its performance in selecting interaction and polynomial effects in GLM frameworks, and in selecting local and seasonal effects in time-series modeling frameworks.
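The core mechanism can be sketched with a weighted LASSO, in which derived terms (here, an interaction) receive a larger penalty weight and therefore need stronger evidence to enter the model. The sketch below is illustrative, not the paper's implementation: the coordinate-descent solver, the simulated data, and the specific penalty weights and tuning value are all assumptions chosen for demonstration.

```python
import numpy as np

def ranked_lasso_cd(X, y, lam, weights, n_iter=500):
    """Coordinate descent for 0.5*||y - X b||^2 + lam * sum_j weights[j] * |b_j|.

    A larger weights[j] demands stronger evidence before feature j enters the
    model, mirroring the ranked-sparsity idea of penalizing derived terms
    (interactions, polynomials) more heavily than main effects.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    resid = y - X @ b
    for _ in range(n_iter):
        for j in range(p):
            resid += X[:, j] * b[j]          # partial residual excluding feature j
            rho = X[:, j] @ resid            # evidence for feature j
            # Soft-threshold at lam * weights[j]: ranked penalties enter here.
            b[j] = np.sign(rho) * max(abs(rho) - lam * weights[j], 0.0) / col_sq[j]
            resid -= X[:, j] * b[j]
    return b

# Illustrative simulation: two main effects matter, the interaction does not.
rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 2.0 * x1 - 1.0 * x2 + 0.5 * rng.standard_normal(n)
X = np.column_stack([x1, x2, x1 * x2])

# Rank the penalties: the derived interaction term is penalized 10x as heavily
# (the weights and lam are arbitrary choices for this demonstration).
b = ranked_lasso_cd(X, y, lam=5.0, weights=np.array([1.0, 1.0, 10.0]))
```

With the heavier penalty, the spurious interaction coefficient is shrunk exactly to zero while the main effects remain close to their true values; under equal weights ("covariate equipoise"), the interaction would compete on the same footing as the main effects.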