Many existing methods for generating counterfactual explanations ignore the intrinsic relationships between data attributes and thus fail to produce realistic counterfactuals. Moreover, the existing models that do account for such relationships require domain knowledge, which limits their applicability to complex real-world tasks. In this paper, we propose a novel approach to generating realistic counterfactual explanations that preserves attribute relationships while minimising expert intervention. The model learns the relationships directly from data with a variational auto-encoder, requiring minimal domain knowledge, and then learns to perturb the latent space accordingly. We conduct extensive experiments on both synthetic and real-world datasets. The results demonstrate that the proposed model learns relationships from the data and preserves them in the generated counterfactuals. In particular, it outperforms competing methods in terms of Mahalanobis distance and constraint feasibility score.
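The abstract describes a two-stage pipeline: a variational auto-encoder that learns attribute relationships from data, followed by perturbations in the learned latent space that yield counterfactuals. The PyTorch sketch below is one plausible reading of that idea, not the paper's actual implementation: the `VAE` architecture, the `counterfactual` routine, and all hyper-parameters are illustrative assumptions. It optimises a latent offset so that a differentiable classifier flips to a target class, while decoding through the VAE keeps the counterfactual on the learned data manifold.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the two-stage idea outlined in the abstract.
# Stage 1: a VAE learns the data manifold (and hence attribute relationships).
# Stage 2: a latent offset is optimised to flip the classifier's prediction.

class VAE(nn.Module):
    def __init__(self, x_dim, z_dim=8, h_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar


def counterfactual(x, vae, clf, target, steps=200, lr=0.05, lam=0.1):
    """Optimise a latent offset `delta` so the (differentiable) classifier
    `clf` predicts `target`, with a proximity penalty keeping the decoded
    counterfactual close to the original input x. Decoding through the VAE
    keeps the result on the learned manifold, so attribute relationships
    learned in stage 1 are preserved. Illustrative values throughout."""
    mu, _ = vae.encode(x)
    mu = mu.detach()                      # freeze the encoding of x
    delta = torch.zeros_like(mu, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_cf = vae.dec(mu + delta)        # decode perturbed latent code
        loss = nn.functional.cross_entropy(clf(x_cf), target) \
               + lam * (x_cf - x).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return vae.dec(mu + delta).detach()
```

Under these assumptions, the proximity weight `lam` trades off how far the counterfactual may drift from the original instance against how strongly the classifier is pushed toward the target class; both the VAE and `clf` would be trained beforehand on the same data.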