Who Said The AI ML Was Fair?
This week we take a look at fairness in A.I. and M.L. and F.A.I.R. data.
This week’s musical inspiration in title and lyrics:
If all is fair in love and war… then what about AI and ML or data? 🤔
To be _fair_, the word _fair_ is doing a lot of work this week. Indeed, one could argue that the _fair_ in AI ML fairness concerns applies precisely when and where a _fare_ is assessed, to be paid by the world, humanity, and future generations for their own data. 🧐
FAIR is not a backronym for Frequently Amorphous Inherent Rigamarole. At least, I hope not. 🤓
This week I read, listened, and watched a bit more than usual. Here are my reading 📖, watching 📺, and listening 🎧 suggestions:
- 📖 Developing Trustworthy Software Tools in which Abi Noda digests the PICSE Framework paper from Brittany Johnson-Matthews, Ph.D., Christian Bird, Denae Ford Robinson, Ph.D., Nicole Forsgren, and Tom Zimmermann.
- 📖 Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness in which Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu make the case for more effective auditing of fairness constraints across subgroups.
- 📖 Inherent Limitations of AI Fairness in which Maarten Buyl and Tijl De Bie provide a summary of wider AI fairness critiques to inform future research in the pursuit of improving fairness.
- 🎧 Understanding Machine Learning Features and Platforms in which Aaron Delp and Brian Gracely interview Gaetan Castelein.
- 🎧 Making Research Data FAIR in which Christopher Kenneally interviews George Strawn, Barend Mons, Christine Kirkpatrick, Erik Schultes, Francisca Oladipo, and Debora Drucker on the history, present, and future of data based upon Findable Accessible Interoperable Reusable (FAIR) principles vs. confusion with Fully AI Ready (FAIR) data assumptions… as well as FAIR Digital Objects Forum and FAIR in Machine Learning, AI Reproducibility, and AI Readiness (FARR) — soooooo good — this discussion is one I’m saving to my podcast app for re-re-listening.
- 📺 Uncovering the Practices and Opportunities for Cross-functional Collaboration around AI Fairness in which Wesley Hanwen Deng provides a summary of recent work on AI fairness with Nur Yildirim, Monica Chang, Motahhare Eslami, Kenneth Holstein and Michael Madaio.
- 📺 Cloud vs. On-Prem Showdown: The Future Battlefield for Generative AI Dominance in which Dave Vellante breaks down how spending momentum in AI is changing based upon the latest ETR data.
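To make the subgroup auditing idea above a bit more concrete, here is a minimal illustrative sketch (my own toy example, not the algorithm from the Kearns et al. paper): it measures the positive-prediction rate for every intersectional subgroup and reports each subgroup's gap from the overall rate. The field names (`gender`, `age_band`, `prediction`) are hypothetical.

```python
# Toy subgroup fairness audit: compare each intersectional subgroup's
# positive-prediction rate against the overall rate.
from itertools import product

def subgroup_rate_gaps(records):
    """records: list of dicts with hypothetical keys
    'gender', 'age_band', and 'prediction' (0 or 1)."""
    overall = sum(r["prediction"] for r in records) / len(records)
    gaps = {}
    genders = sorted({r["gender"] for r in records})
    age_bands = sorted({r["age_band"] for r in records})
    for g, a in product(genders, age_bands):
        group = [r for r in records
                 if r["gender"] == g and r["age_band"] == a]
        if group:  # skip empty intersections
            rate = sum(r["prediction"] for r in group) / len(group)
            gaps[(g, a)] = rate - overall  # signed gap vs. overall rate
    return gaps
```

Even this toy version hints at why the "gerrymandering" framing matters: tiny intersections produce noisy rates, and the number of subgroups grows combinatorially with attributes, which is precisely what makes auditing rich subgroup classes hard.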
I’m reminded that FAIR principles are less than a decade old. That’s young in IT terms.
As I’ve said before… ethics and empathy are needed.
the on again off again blog of Jay Cuthrell and Fudge Sunday weekly newsletter
Dystopian fever dreams aside, it can be useful to imagine a system of perverse market incentives that drives opaquely enriched data to become more closed, more proprietary, more paywalled, and more about short-term extraction than about balanced, long-term enlightenment, both local and global. Indeed, we must continuously balance market demands with the pursuit of science, applied technology, and our evolving human values.
Seeking fairness in AI/ML pursuits and FAIR data frameworks could easily be part of how we govern ourselves in the future — or not. Without both, it is not clear how auditing would be possible, and terms such as transparency could be reduced to little more than an early 2000s era platitude that exited the zeitgeist almost as soon as it entered.
So, what will be the next big thing in AI/ML fairness and FAIR data?
Until then… Place your bets!
As a reminder, after a +25 year walkabout, I’m an IBMer (again). For 2023, in “Work Plug”, I share a new link each week that is educational, accessible, and relevant to platform engineering from fellow IBMers in the wider IBM Community.
I am linking to my disclosure.