Publications

You can also find my articles on my Google Scholar profile.

Conference Papers


FedQV: Leveraging Quadratic Voting in Federated Learning

Published in ACM SIGMETRICS/IFIP PERFORMANCE, 2024

In this paper, we propose FedQV, a novel aggregation algorithm built upon the quadratic voting scheme, recently proposed as a better alternative to 1p1v-based elections. Our theoretical analysis establishes that FedQV is a truthful mechanism, in which bidding according to one’s true valuation is a dominant strategy, and that it achieves a convergence rate matching those of state-of-the-art methods. Furthermore, our empirical analysis using multiple real-world datasets validates the superior performance of FedQV against poisoning attacks. It also shows that combining FedQV with unequal voting “budgets” derived from a reputation score increases its performance benefits even further. Finally, we show that FedQV can be easily combined with Byzantine-robust privacy-preserving mechanisms to enhance its robustness against both poisoning and privacy attacks.

Code Here
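As background, the quadratic-voting intuition behind the aggregation rule can be sketched in a few lines. This is an illustrative simplification under assumed inputs (`scores` as each client's bid and `budgets` as per-client voting budgets), not FedQV's actual mechanism; see the paper and code for the real algorithm.

```python
import numpy as np

def quadratic_voting_average(updates, scores, budgets):
    """Illustrative sketch of quadratic-voting aggregation (NOT FedQV itself).

    Each client spends credits out of its budget; its aggregation weight is
    the square root of the credits spent, so influence grows sub-linearly
    and a single client cannot dominate the global model.
    """
    spent = np.minimum(np.asarray(scores, dtype=float),
                       np.asarray(budgets, dtype=float))  # cap bids by budget
    weights = np.sqrt(np.clip(spent, 0.0, None))
    if weights.sum() == 0:
        weights = np.ones(len(updates))  # fall back to plain averaging
    # weighted average of the clients' model updates
    return np.average(np.asarray(updates, dtype=float), axis=0, weights=weights)
```

With scores (1, 4) the second client receives only twice, not four times, the weight of the first, which is the sub-linear influence quadratic voting is designed to provide.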

Securing Federated Sensitive Topic Classification against Poisoning Attacks

Published in the 30th Annual Network and Distributed System Security Symposium (NDSS), 2023

We present a Federated Learning (FL) based solution for building a distributed classifier capable of detecting URLs containing GDPR-sensitive content related to categories such as health, sexual preference, political beliefs, etc. Although such a classifier addresses the limitations of previous offline/centralised classifiers, it is still vulnerable to poisoning attacks from malicious users who may attempt to reduce the accuracy for benign users by disseminating faulty model updates. To guard against this, we develop a robust aggregation scheme based on subjective logic and residual-based attack detection. Employing a combination of theoretical analysis, trace-driven simulation, and experimental validation with a prototype and real users, we show that our classifier can detect sensitive content with high accuracy, learn new labels fast, and remain robust against poisoning attacks from malicious users as well as imperfect input from non-malicious ones.

Code Here
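The residual-based detection idea can be illustrated with a toy outlier test: compare each client's update to a robust reference and flag large deviations. This is a simplified sketch (the coordinate-wise median reference and MAD-based z-score are assumptions for illustration), not the subjective-logic scheme developed in the paper.

```python
import numpy as np

def flag_suspicious_updates(updates, thresh=2.0):
    """Toy residual-based detector (a sketch, not the paper's scheme).

    Flags clients whose update is unusually far from the coordinate-wise
    median, using a robust (MAD-based) z-score on the residual norms.
    """
    U = np.asarray(updates, dtype=float)
    median = np.median(U, axis=0)                 # robust reference model
    residuals = np.linalg.norm(U - median, axis=1)
    mad = np.median(np.abs(residuals - np.median(residuals))) + 1e-12
    zscores = 0.6745 * (residuals - np.median(residuals)) / mad
    return zscores > thresh                        # True = likely poisoned
```

Benign updates cluster around the median, so a single large poisoned update stands out even though it also shifted the plain mean.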

Journal Articles


PriPrune: Quantifying and Preserving Privacy in Pruned Federated Learning

Published in ACM Transactions on Modeling and Performance Evaluation of Computing Systems, 2024

In this paper, we first characterize the privacy offered by pruning. We establish information-theoretic upper bounds on the information leakage from pruned FL and experimentally validate them under state-of-the-art privacy attacks across different FL pruning schemes. Second, we introduce PriPrune, a privacy-aware algorithm for pruning in FL. PriPrune uses defense pruning masks, which can be applied locally after any pruning algorithm, and adapts the defense pruning rate to jointly optimize privacy and accuracy. Another key idea in the design of PriPrune is Pseudo-Pruning: the model undergoes defense pruning locally and only the pruned model is sent to the server, while the weights removed by the defense mask are withheld locally for future local training rather than being discarded.
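The Pseudo-Pruning idea can be sketched as follows; the magnitude-based mask and fixed `defense_rate` here are placeholders for illustration, not PriPrune's adaptive, learned defense mask.

```python
import numpy as np

def pseudo_prune(weights, defense_rate=0.3):
    """Sketch of Pseudo-Pruning (illustrative; the mask choice and rate
    are simplifying assumptions, not PriPrune's defense mask).

    Zeroes out a fraction of weights in the copy sent to the server,
    while the client keeps the full weights for future local training.
    """
    w = np.asarray(weights, dtype=float)
    k = int(defense_rate * w.size)
    # hide the k smallest-magnitude weights from the server
    idx = np.argsort(np.abs(w))[:k]
    mask = np.ones_like(w)
    mask[idx] = 0.0
    sent = w * mask    # what the server sees (pruned model)
    kept = w.copy()    # full weights retained locally
    return sent, kept, mask
```

The server only ever observes `sent`, which is what limits the information available to privacy attacks, while local training continues from `kept`.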

Pre-impact alarm system for fall detection using MEMS sensors and HMM-based SVM classifier

Published in 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2018

To provide timely assistance after a fall occurs, we propose a pre-fall alarm system. To test its reliability, eighteen subjects wore the device on the waist and participated in a series of experiments. Acceleration and angular velocity time series extracted from human motion were used to describe motion features, and an HMM-based SVM classifier was used to determine the maximum separation boundary between falls and Activities of Daily Living (ADLs). The proposed device accurately recognizes fall events, provides additional functions, and has the advantages of small size and low power consumption.

Estimation for Partially Linear Errors-in-Variable Models

Published in Chinese Journal of Applied Probability and Statistics, 2018

In this paper, we consider the estimation problem for partially linear models with additive measurement errors in the nonparametric part. Two kinds of estimators are proposed. The first is an integral-moment-based estimator using deconvolution kernel techniques, for which strong consistency is established. The second is a simulation-based estimator that avoids the integrals involved in the first. Simulation studies are conducted to examine the performance of the proposed estimators.
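For readers unfamiliar with deconvolution kernels, the standard construction from the errors-in-variables literature (shown here as general background, not the paper's specific estimator) replaces the usual kernel $K$ with

```latex
K^{*}_{h}(u) \;=\; \frac{1}{2\pi} \int e^{-\mathrm{i} t u}\,
    \frac{\phi_{K}(t)}{\phi_{\varepsilon}(t/h)}\, \mathrm{d}t,
```

where $\phi_K$ is the Fourier transform of $K$, $\phi_\varepsilon$ is the characteristic function of the measurement error, and $h$ is the bandwidth; dividing by $\phi_\varepsilon$ undoes the smoothing that the measurement error applies to the observed covariate's distribution.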

Nonlinear measurement errors models subject to partial linear additive distortion

Published in Brazilian Journal of Probability and Statistics, 2018

We study nonlinear regression models when the response and predictors are unobservable and distorted in a multiplicative fashion by partial linear additive models (PLAM) of some observed confounding variables. After approximating the additive nonparametric components in the PLAM via polynomial splines and calibrating the unobserved response and unobserved predictors, we develop a semi-parametric profile nonlinear least squares procedure to estimate the parameters of interest. The resulting estimators are shown to be asymptotically normal. To construct confidence intervals for the parameters of interest, an empirical likelihood-based statistic is proposed to improve the accuracy of the associated normal approximation. We also show that the empirical likelihood statistic is asymptotically chi-squared. Moreover, a test procedure based on the empirical process is proposed to check whether the parametric regression model is adequate, with a wild bootstrap procedure to compute p-values. Simulation studies are conducted to examine the performance of the estimation and testing procedures. The methods are applied to re-analyze real data from a diabetes study for illustration.

Workshop Papers


Information-Theoretical Bounds on Privacy Leakage in Pruned Federated Learning

Published in ISIT 2024 Workshop on Information-Theoretic Methods for Trustworthy Machine Learning, 2024

In this paper, we investigate for the first time the privacy impact of model pruning in FL. We establish information-theoretic upper bounds on the information leakage from pruned FL and we experimentally validate them under state-of-the-art privacy attacks across different FL pruning schemes. This evaluation provides valuable insights into the choices and parameters that can affect the privacy protection provided by pruning.

Strengthening Privacy in Robust Federated Learning through Secure Aggregation

Published in Workshop on AI Systems with Confidential Computing (AISCC) in conjunction with NDSS, 2024

In this work, we show how to implement SA on top of FEDQV in order to address both poisoning and privacy attacks. We mount several privacy attacks against FEDQV and demonstrate the effectiveness of SA in countering them.
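How secure aggregation (SA) hides individual updates while preserving their sum can be sketched with the classic pairwise-masking trick. This is a textbook illustration, not the protocol implemented in the paper: client $i$ adds a shared random mask for each peer $j > i$ and subtracts it for each $j < i$, so all masks cancel in the server's sum.

```python
import numpy as np

def pairwise_masks(n_clients, dim, seed=0):
    """Toy pairwise-masking sketch of secure aggregation (illustrative only).

    Returns one additive mask per client; the masks sum to zero, so the
    server recovers the exact sum of updates without seeing any single one.
    """
    rng = np.random.default_rng(seed)
    masks = np.zeros((n_clients, dim))
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            r = rng.normal(size=dim)  # shared secret between clients i and j
            masks[i] += r             # client i adds the mask
            masks[j] -= r             # client j subtracts it
    return masks
```

Each masked update `update[i] + masks[i]` looks random on its own, yet summing all of them yields exactly `sum(updates)`, which is what lets SA coexist with sum-based robust aggregation.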