A major area of research is biomarker discovery using gene expression data. Such data is high-dimensional and must often be classified or clustered using machine learning techniques for further analysis. An important preprocessing step is feature selection (FS), for which many methods have been devised. However, applying different FS techniques to the same dataset does not always produce the same results. In this work, we investigate the robustness of FS methods, where robustness is defined as the stability of a given gene pool with respect to the data and the FS method used. Our approach is to compare the feature subsets obtained when running diverse FS methods on different gene expression datasets. As a first step, 10 FS methods were executed on 2 different datasets. Based on the results obtained, 2 of these methods were further investigated using 10 different datasets. We also studied how selecting an increasing number of features affects the inter-method percentage similarity. Our results show that the studied methods exhibit a high amount of variability in the resulting feature subsets, which differed both across methods (inter-method) and across datasets for the same method (intra-method). The reason for this behaviour is not clear, and objective criteria for assessing the ideal (best) subset should be further investigated.
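To make the comparison concrete, the inter-method percentage similarity can be sketched as the overlap between the top-k feature sets returned by two FS methods. The sketch below is illustrative only: it uses scikit-learn's SelectKBest with ANOVA F-scores and mutual information as two stand-in methods on synthetic data, not the 10 methods or the gene expression datasets studied in this work.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif

def selected_features(score_func, X, y, k):
    """Return the indices of the top-k features chosen by one FS method."""
    selector = SelectKBest(score_func=score_func, k=k).fit(X, y)
    return set(np.flatnonzero(selector.get_support()))

def percent_overlap(a, b, k):
    """Inter-method similarity: shared features as a percentage of k."""
    return 100.0 * len(a & b) / k

# Synthetic stand-in for a gene expression matrix (samples x genes).
X, y = make_classification(n_samples=100, n_features=500,
                           n_informative=20, random_state=0)

# Repeat for increasing k to see how subset size affects the similarity.
for k in (10, 50, 100):
    anova = selected_features(f_classif, X, y, k)
    mi = selected_features(mutual_info_classif, X, y, k)
    print(f"k={k:3d}: overlap = {percent_overlap(anova, mi, k):.1f}%")
```

The same overlap measure can be applied intra-method by running one FS method on different datasets (or resamples) over a shared feature space and comparing the resulting index sets.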