Fairness Measures of Machine Learning Models in Judicial Penalty Prediction

Yanjun Li, Huan Huang, Qiang Geng, Xinwei Guo, Yuyu Yuan


Abstract: Machine learning (ML) has been widely adopted in software applications across many domains. However, alongside their outstanding performance, ML models, which are essentially a kind of black-box software, can behave unfairly and be hard to understand in many cases. In our human-centered society, an unfair decision can damage human values and even cause severe social consequences, especially in decision-critical scenarios such as legal judgment. Although existing works have investigated ML models in terms of robustness, accuracy, security, privacy, and quality, the study of ML fairness is still at an early stage. In this paper, we first propose a set of fairness metrics for ML models from different perspectives. Based on these metrics, we perform a comparative study of the fairness of widely used classic ML and deep learning models on real-world judicial judgments. The experimental results reveal that current state-of-the-art ML models can still raise concerns about unfair decision-making; ML models with both high accuracy and high fairness are urgently needed.
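The specific fairness metrics proposed in the paper are not listed on this page. As a generic illustration of the kind of group-fairness measure such a study typically evaluates, the sketch below computes the statistical parity difference: the gap in the rate of a favorable outcome (e.g., a lenient predicted penalty) between two groups. The function name and the two-group encoding are assumptions for illustration, not the paper's definitions.

```python
# Illustrative sketch of a common group-fairness metric (statistical
# parity difference); not the paper's own metric definitions.

def statistical_parity_difference(preds, groups, favorable=1,
                                  group_a=0, group_b=1):
    """P(pred == favorable | group_a) - P(pred == favorable | group_b).

    A value of 0 means both groups receive the favorable outcome at the
    same rate; larger magnitudes indicate a larger disparity.
    """
    def favorable_rate(g):
        members = [p for p, s in zip(preds, groups) if s == g]
        # Guard against an empty group to avoid division by zero.
        return sum(1 for p in members if p == favorable) / max(1, len(members))

    return favorable_rate(group_a) - favorable_rate(group_b)


# Toy example: group 0 receives the favorable outcome in 2 of 3 cases,
# group 1 in 1 of 3 cases.
preds  = [1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 1, 1, 1]
print(statistical_parity_difference(preds, groups))  # → 0.333...
```

In practice, a metric like this would be computed per protected attribute (gender, region, etc.) over a held-out test set, and compared across models alongside accuracy.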


Keywords: Fairness, Machine learning, Judgment document, Big data

Citation Format:
Yanjun Li, Huan Huang, Qiang Geng, Xinwei Guo, Yuyu Yuan, "Fairness Measures of Machine Learning Models in Judicial Penalty Prediction," Journal of Internet Technology, vol. 23, no. 5, pp. 1109-1116, Sep. 2022.


Published by Executive Committee, Taiwan Academic Network, Ministry of Education, Taipei, Taiwan, R.O.C
JIT Editorial Office, Office of Library and Information Services, National Dong Hwa University
No. 1, Sec. 2, Da Hsueh Rd., Shoufeng, Hualien 974301, Taiwan, R.O.C.
Tel: +886-3-931-7314  E-mail: jit.editorial@gmail.com