Radiomics has shown promising results in several medical studies, yet its features offer limited discriminative and informative capability and vary strongly with the tomographic scanner type, pixel spacing, acquisition protocol, and reconstruction parameters. We propose and compare two methods that transform quantitative image features to improve their stability across varying image acquisition parameters while preserving their ability to discriminate tissue textures. In this way, variations in the extracted features reflect true physiopathological tissue changes in the scanned patients. The first approach is based on a two-layer neural network that can learn a nonlinear standardization transformation of various feature types, including handcrafted and deep features. The second explores domain adversarial training to increase the invariance of the transformed features to the scanner of origin. A set of experiments on a publicly available computed tomography texture phantom dataset, scanned with various imaging devices and parameters, demonstrates that the proposed approaches generalize to unseen textures and unseen scanners.
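The two ideas above can be illustrated with a minimal NumPy sketch: a two-layer network that maps raw radiomic feature vectors to standardized ones, plus the gradient-reversal trick that underlies domain adversarial training, where the scanner classifier's gradient is negated (scaled by a hypothetical weight `lam`) before it reaches the feature transformer. All dimensions and parameter values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_layer_transform(x, W1, b1, W2, b2):
    """Two-layer network mapping raw radiomic features (n_samples, n_in)
    to standardized features of the same dimensionality."""
    h = np.tanh(x @ W1 + b1)   # hidden layer with a nonlinearity
    return h @ W2 + b2         # standardized feature vector

# Hypothetical sizes: 10 input features, 16 hidden units.
n_in, n_hid = 10, 16
W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)

x = rng.normal(size=(4, n_in))            # 4 texture patches, raw features
z = two_layer_transform(x, W1, b1, W2, b2)

def gradient_reversal_backward(grad, lam=1.0):
    """Domain-adversarial step: during backpropagation, the gradient of the
    scanner (domain) classifier is multiplied by -lam before flowing into
    the transformer, pushing the transformed features toward scanner
    invariance while the classifier still tries to identify the scanner."""
    return -lam * grad

g = np.ones_like(z)                        # gradient from the domain classifier
g_rev = gradient_reversal_backward(g, lam=0.5)
```

In a full training loop, the transformer would be optimized jointly with a texture classifier (standard gradients) and against a scanner classifier (reversed gradients), so that texture information is kept while scanner-specific variation is suppressed.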