Machine Learning needs Designers
An old coworker recently asked me about resources for a UX designer to learn about AI, specifically how to prevent bias in machine learning.
Machine learning desperately needs UX.
Additionally, UX could learn a lot from machine learning. The processes data scientists use to traverse and untangle data are beautiful. They can be used without ever touching an algorithm. But I’ll dig into that another time.
Machine Learning needs UX in six fundamental ways:
- Setting the scope of the problem
- Scrutinizing data sources
- Finding orthogonal paths to alternative data
- Scaffolding the right data to the right context
- Anticipating and designing for edge cases and cascading collapse
- Challenging the concept of a user interface and advancing it
That’s a lot to unpack. But first, there are a few common misconceptions about machine learning. Correction: thousands of misconceptions. From the outside, machine learning seems like a Byzantine labyrinth. Sometimes I wonder if it’s intentional because it makes it easy to spot outsiders.
For example, the term “bias” has two distinct meanings. One is bad (we’ll call it Type 1), and the other is good (we’ll call it Type 2). Well, sort of.
When the phrase “bias in machine learning” comes up, it usually means the bad kind. And it usually refers to a machine learning model that has been deployed to the real world. It turns out, too late, that it was either trained on insufficient data or implemented in such a way as to enable bias and ruin people’s lives. (Weapons of Math Destruction is a great primer to learn about Type 1 bias.)
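A toy sketch of how Type 1 bias can creep in (every name and number here is invented for illustration): a “model” that is nothing more than a cutoff learned from an unrepresentative training sample ends up denying qualified people from the group it barely saw.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two groups of applicants; "qualified" means a true score of 60 or more.
group_a = rng.normal(70, 10, 1000)  # heavily represented in training data
group_b = rng.normal(55, 10, 1000)  # barely represented in training data

# Biased training sample: 95% group A, 5% group B.
# The "model" is just a cutoff set at the training sample's mean.
train = np.concatenate([group_a[:950], group_b[:50]])
cutoff = train.mean()

def approve(scores, cutoff):
    # Approve anyone at or above the learned cutoff.
    return scores >= cutoff

truth_a = group_a >= 60  # who is actually qualified in group A
truth_b = group_b >= 60  # who is actually qualified in group B

# Rate at which QUALIFIED people are wrongly denied, per group.
fn_a = np.mean(~approve(group_a[truth_a], cutoff))
fn_b = np.mean(~approve(group_b[truth_b], cutoff))
print(f"cutoff={cutoff:.1f}, qualified-but-denied: A={fn_a:.2f}, B={fn_b:.2f}")
```

The cutoff lands near group A's average, so qualified members of group B are denied at a far higher rate than qualified members of group A. Nothing in the code is malicious; the harm comes entirely from who was, and wasn't, in the training data.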
Type 2 bias has very little to do with anyone but data scientists. Bias is a term to describe the shape of the data. The data are typically just numbers with no real-world contingency attached to them. Type 2 bias can be good because it is an early form of a pattern. It’s like an opinion. The opinion could be completely wrong, but it’s something to work with.
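Type 2 bias, sketched in the statistical sense (assumptions: a deliberately simple model and made-up data): a straight line fit to gently curved data. The line is wrong in detail, but it commits to a pattern, and its errors are systematic rather than random. That systematic error is the bias, and it is a workable opinion about the trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Curved "truth" plus noise.
x = np.linspace(0, 10, 50)
y = 0.5 * x**2 + rng.normal(0, 2, size=x.shape)

# High-bias model: a degree-1 polynomial (a straight line).
slope, intercept = np.polyfit(x, y, 1)
line = slope * x + intercept

# The residuals are structured: positive at both ends, negative in the
# middle, because a line cannot bend. That structure IS the bias.
residuals = y - line
print(residuals.round(2))
```

If the model were flexible enough (say, a degree-2 fit here), the structure in the residuals would disappear, trading bias for variance.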
It would be foolish to assume that these biases have nothing to do with each other; they do. But you must be able to distinguish between Type 1 and Type 2 bias to converse with a data scientist.
The term “bias” in data-science parlance is a bit unusual. Most ML terms are as dry as dinosaur bones. Terms like p-value, SOTA, and minimax fail to evoke any reaction other than a slight bitterness on the tongue. Some of the names have been around for 70 years, as if grown in a lab deep below the surface of the earth, cloistered from anything natural. Bias, by contrast, makes metaphorical sense. But given how much trouble it has caused, the hermetic approach may have been wiser. (Personally, I wouldn’t mind if Type 2 bias graciously accepted a different moniker like “inflection” or “preference” - but that could create even more semantic confusion down the road.)
But fear not! Nearly all the complex names describe things we do quite intuitively. A large percentage of them are just ways to clean up a mess. So all that is needed is to match the intuition with the crazy-ass name.
More on that later. (To be continued.)