Common Traps When Building AI Systems

Avoiding Common ML Traps

Assessing common mistakes that lead to unintended side effects.

The Framing Trap

Failure to model the entire system over which a social criterion, such as fairness, will be enforced.

For this trap, look closely at your outcome variables. Are they a proxy for the actual outcome you wish to achieve? What evidence of existing negative bias already exists with regard to these variables?
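
A quick way to start interrogating an outcome variable is to look at how it is distributed across the groups you care about. The sketch below is a minimal illustration, assuming a pandas DataFrame with hypothetical "outcome" and "group" columns; the column names and toy data are placeholders, not part of any particular project.

```python
import pandas as pd

def audit_outcome_rates(df: pd.DataFrame,
                        outcome_col: str = "outcome",
                        group_col: str = "group") -> pd.DataFrame:
    """Summarize how a candidate outcome variable is distributed per group.

    A large gap between groups does not prove the label is biased, but it is
    a prompt to ask whether the label is only a proxy for the outcome you
    actually care about, and where the gap comes from.
    """
    summary = (
        df.groupby(group_col)[outcome_col]
          .agg(positive_rate="mean", count="count")
          .reset_index()
    )
    # Gap between each group's positive rate and the overall rate.
    summary["gap_vs_overall"] = summary["positive_rate"] - df[outcome_col].mean()
    return summary

# Illustrative usage with toy data:
toy = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B"],
    "outcome": [1, 0, 1, 0, 0, 1],
})
print(audit_outcome_rates(toy))
```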

The Portability Trap

Failure to understand how repurposing algorithmic solutions designed for one social context may be misleading, inaccurate, or otherwise do harm when applied to a different context.

Here, you will want to fully understand both the context for which this model is being built and the context in which it will actually be used. Clear documentation of both should be distributed across the entire team. Which stakeholders do you expect to be affected where this technology is deployed? Are they informed?
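
One lightweight way to make that shared documentation concrete is to record the intended context alongside the model itself, so a mismatch is visible the moment someone considers reusing the model elsewhere. The sketch below is a hypothetical structure, loosely in the spirit of a model card; the field names and example values are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class ContextRecord:
    """Hypothetical record of the social context a model was built for."""
    built_for: str                    # context the model was designed in
    intended_use: str                 # task it is meant to support
    affected_stakeholders: list[str]  # who is impacted where it is deployed
    stakeholders_informed: bool       # have they been told / consulted?
    known_limits: list[str] = field(default_factory=list)

    def flag_reuse(self, new_context: str) -> str:
        """Force an explicit check before the model is ported elsewhere."""
        if new_context.strip().lower() != self.built_for.strip().lower():
            return (f"Model was built for '{self.built_for}' but is being "
                    f"considered for '{new_context}': re-validate before reuse.")
        return "Context matches the one the model was built for."

# Illustrative usage with made-up values:
record = ContextRecord(
    built_for="loan pre-screening in region X",
    intended_use="prioritizing applications for human review",
    affected_stakeholders=["applicants", "loan officers"],
    stakeholders_informed=True,
)
print(record.flag_reuse("hiring decisions"))
```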

The Formalism Trap

Failure to account for the full meaning of social concepts such as fairness, which can be procedural, contextual, and contestable, and cannot be resolved through mathematical formalisms.

How does your chosen solution continue to honor the existing procedural "catches" that keep the decision-making process fair? What methods of recourse are available to those who are judged unfairly?
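
To see what a mathematical formalism does and does not capture, consider demographic parity, one common formalization of fairness. The sketch below computes it for hypothetical binary predictions; note that nothing in the resulting number says how a decision was reached (procedure), what a positive prediction means in this setting (context), or whether the people affected can contest it (recourse).

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    This is one common mathematical formalism of 'fairness'; everything
    procedural, contextual, and contestable falls outside it.
    """
    groups = np.unique(group)
    assert len(groups) == 2, "sketch assumes exactly two groups"
    rate_a = y_pred[group == groups[0]].mean()
    rate_b = y_pred[group == groups[1]].mean()
    return float(abs(rate_a - rate_b))

# Illustrative usage with toy predictions:
y_pred = np.array([1, 0, 1, 1, 0, 0])
group  = np.array(["A", "A", "A", "B", "B", "B"])
print(demographic_parity_difference(y_pred, group))  # ~0.33
```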

The Ripple Effect Trap

Failure to understand how the insertion of technology into an existing social system changes the behaviors and embedded values of the pre-existing system.

How do you think the introduction of this recommendation system will affect your users? Which behaviors do you intend to change? Can you think of any changes in behavior that you do not intend but that could result from your software?
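
One practical habit is to monitor a few behavioral metrics before and after the system goes live, so unintended shifts at least become visible. The sketch below is a toy illustration assuming a hypothetical metric such as average session length; the threshold and data are made up.

```python
import statistics

def behavior_shift(before: list[float], after: list[float],
                   threshold: float = 0.10) -> str:
    """Compare a behavioral metric before and after launch and flag shifts
    larger than a relative threshold.

    A flagged shift is not proof of harm; it is a prompt to ask whether the
    change in behavior was one you intended.
    """
    mean_before = statistics.mean(before)
    mean_after = statistics.mean(after)
    relative_change = (mean_after - mean_before) / mean_before
    if abs(relative_change) > threshold:
        return f"Behavior shifted by {relative_change:+.1%}: was this intended?"
    return f"Shift of {relative_change:+.1%} is within the expected range."

# Illustrative usage with made-up session lengths (minutes):
print(behavior_shift(before=[12.0, 9.5, 11.0], after=[15.5, 14.0, 16.0]))
```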

The Solutionism Trap

Failure to recognize the possibility that the best solution to a problem may not involve technology.

Will this technology elevate social values that can be quantified? Will it devalue those that cannot? What values might take a back seat if this technology is implemented?