Abstract
Graph representation learning is central to applying machine learning (ML) models to complex graphs, such as social networks. Ensuring 'fair' representations is essential, given the societal implications and the use of sensitive personal data. In this paper, we demonstrate how the parametrization of the *CrossWalk* algorithm influences the ability to infer a sensitive attribute from node embeddings. By fine-tuning hyperparameters, we show that it is possible to either significantly enhance or obscure the detectability of these attributes. This offers a valuable tool for improving the fairness of ML systems that use graph embeddings, making them adaptable to different fairness paradigms.
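The CrossWalk parametrization discussed in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`boundary_proximity`, `crosswalk_weights`) are assumptions, and the sketch follows the published CrossWalk idea, in which a hyperparameter `alpha` shifts edge mass toward cross-group neighbours and an exponent `p` sharpens the bias toward nodes near the group boundary — the knobs whose tuning governs how detectable the sensitive attribute remains in the embeddings.

```python
import random
from collections import defaultdict

def boundary_proximity(adj, attr, walk_len=5, n_walks=10):
    """Estimate each node's closeness to the group boundary as the
    fraction of short-random-walk visits that land in the other group."""
    m = {}
    for v in adj:
        cross = total = 0
        for _ in range(n_walks):
            u = v
            for _ in range(walk_len):
                u = random.choice(adj[u])
                total += 1
                if attr[u] != attr[v]:
                    cross += 1
        m[v] = cross / total if total else 0.0
    return m

def crosswalk_weights(adj, attr, alpha=0.5, p=2.0):
    """CrossWalk-style reweighting (sketch): for each node, give mass
    alpha to cross-group neighbours and 1 - alpha to same-group ones,
    biased toward boundary-adjacent nodes via the exponent p."""
    m = boundary_proximity(adj, attr)
    w = defaultdict(dict)
    for v in adj:
        same = [u for u in adj[v] if attr[u] == attr[v]]
        diff = [u for u in adj[v] if attr[u] != attr[v]]
        for group, mass in ((same, 1 - alpha), (diff, alpha)):
            z = sum(m[u] ** p for u in group)  # per-group normalizer
            for u in group:
                w[v][u] = mass * (m[u] ** p) / z if z else 0.0
    return w
```

Embeddings (e.g. via skip-gram over random walks) would then be trained on walks sampled from `w` instead of the raw adjacency; raising `alpha` or `p` pushes walks across the group boundary, which is the mechanism the abstract tunes to obscure or expose the sensitive attribute.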
Original language | English |
---|---|
Publisher | arXiv |
Pages | 1-8 |
Number of pages | 8 |
DOIs | |
Publication status | Published - 29 Jul 2024 |
Keywords
- cs.SI
- cs.CY