• Art Jahnke

    Senior Contributing Editor

    Art Jahnke began his career at the Real Paper, a Boston area alternative weekly. He has worked as a writer and editor at Boston Magazine, web editorial director at CXO Media, and executive editor in Marketing & Communications at Boston University, where his work was honored with many awards.

Comments & Discussion

Boston University moderates comments to facilitate an informed, substantive, civil conversation. Abusive, profane, self-promotional, misleading, incoherent, or off-topic comments will be rejected. Moderators are available during regular business hours (EST) and can only accept comments written in English. Statistics or facts must include a citation or a link to the citation.

There are 3 comments on Are Computer-Aided Decisions Actually Fair?

  1. First of all, I am very grateful for the contributions of these researchers. Bias in a computer algorithm’s final decision caused by incomplete or skewed input data is very common in applications of machine learning. For example, in a classification task where a large number of Class A instances is mixed with a small number of Class B instances, the algorithm may treat the few B instances as noise, introducing small biases into the final decision (the first sketch after these comments illustrates this effect). When we connect many systems together, such as a classification system + prediction system + scoring system, each system produces some very small bias, but together these small biases may compound into a very large one, just like the COMPAS example in the article. These biases are very unfair. When I encountered a similar problem before, I would look for particularly good datasets, but such datasets are very hard to find. So when I saw this article, I was very excited to see that there is another way to address these biases. Of course, according to the article, this method is not perfect, so further research is very necessary.

  2. I once reproduced a credit score prediction project that used the random forest algorithm to classify people who might default on their loans.
    I used a dataset from Kaggle; the training and test sets were both drawn from it, and the results seemed very good.
    However, someone might ask whether this data can represent the true condition of different kinds of people. Because each person has only one record, I realized that there must be something unfair to people if this program were actually deployed in banks. This bias seems unavoidable for a program designer (the second sketch after these comments gives a rough reconstruction of this setup).
    However, after reading this article, I am very happy to know that some great researchers are working on this. Their research may lead to a new evaluation metric for algorithms in the future, which can help people choose better algorithms for different kinds of problems.

  3. In fact, I think this kind of bias is really common in our lives, and we’ve seen a lot of news about this kind of unfairness. While the choice of algorithm is important in machine learning, so is the choice of dataset; an inappropriate dataset can greatly distort the resulting decisions. I think this study is a breakthrough, and we really should pay attention to this problem. It’s not just machines: we humans are also biased, and it’s important to optimize machine learning algorithms with that in mind.
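A minimal sketch of the class-imbalance effect described in the first comment, assuming synthetic data and scikit-learn's random forest (the 99:1 class ratio and model choice are illustrative, not taken from the article):

```python
# Sketch: a classifier trained on heavily imbalanced data tends to
# treat the rare class as noise. Synthetic data; the 99:1 ratio and
# model choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# 99% "Class A" (label 0), 1% "Class B" (label 1)
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Recall for class 1 is typically far below recall for class 0:
# the model has effectively learned to ignore the minority class.
print(classification_report(y_test, clf.predict(X_test), digits=3))
```

One common mitigation is to reweight the minority class, e.g. `RandomForestClassifier(class_weight="balanced")`, though reweighting alone does not remove bias inherited from the data itself.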
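The credit-scoring setup in the second comment can be sketched along the same lines. The file name, column names, and "group" attribute below are hypothetical, since the commenter's Kaggle dataset is not specified; the point is that overall accuracy can look very good while error rates diverge across groups:

```python
# Rough reconstruction of the second comment's setup: a random forest
# predicting loan default, then evaluated per group. The CSV path,
# column names, and sensitive attribute are all hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("credit.csv")             # hypothetical file
y = df["default"]                          # hypothetical label column
group = df["group"]                        # hypothetical sensitive attribute
X = df.drop(columns=["default", "group"])

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, stratify=y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Overall accuracy may look fine even when per-group accuracy
# diverges, which is the unfairness the commenter worries about.
print("overall:", accuracy_score(y_te, pred))
for g in g_te.unique():
    mask = (g_te == g).to_numpy()
    print(g, accuracy_score(y_te[mask], pred[mask]))
```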
