A big thank you to our new sponsor, NexusTek!
Who Said The AI ML Was Fair?
This week we take a look at fairness in AI and ML, as well as FAIR data.
This week's musical inspiration in title and lyrics:
https://open.spotify.com/track/6me09fq5f0q9l132jjWQM4?si=52a272c153dd4ced
Getting Informed
If all is fair in love and war… then what about AI and ML or data? 🤔
To be _fair_, the word _fair_ is doing a lot of work this week. Indeed, one could argue that the _fair_ in AI/ML fairness concerns applies specifically when and where a _fare_ is assessed, to be paid by the world, humanity, and future generations for their own data. 🧠
FAIR is not a backronym for Frequently Amorphous Inherent Rigamarole. At least, I hope not. 🤔
This week I read, listened, and watched a bit more than usual. Here are my reading 📖, watching 📺, and listening 🎧 suggestions:
- 📖 Developing Trustworthy Software Tools in which Abi Noda digests the PICSE Framework paper from Brittany Johnson-Matthews, Ph.D., Christian Bird, Denae Ford Robinson, Ph.D., Nicole Forsgren, and Tom Zimmermann.
- 📖 Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness in which Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu make the case for more effective auditing of fairness constraints across subgroups (see the sketch after this list).
- 📖 Inherent Limitations of AI Fairness in which Maarten Buyl and Tijl De Bie provide a summary of wider AI fairness critiques to inform future research in the pursuit of improving fairness.
- 🎧 Understanding Machine Learning Features and Platforms in which Aaron Delp and Brian Gracely interview Gaetan Castelein.
- 🎧 Making Research Data FAIR in which Christopher Kenneally interviews George Strawn, Barend Mons, Christine Kirkpatrick, Erik Schultes, Francisca Oladipo, and Debora Drucker on the history, present, and future of data built on Findable, Accessible, Interoperable, Reusable (FAIR) principles versus the confusion with Fully AI Ready (FAIR) data assumptions, plus the FAIR Digital Objects Forum and FAIR in Machine Learning, AI Reproducibility, and AI Readiness (FARR). Soooooo good; this discussion is one I'm saving in my podcast app for re-re-listening.
- 📺 Uncovering the Practices and Opportunities for Cross-functional Collaboration around AI Fairness in which Wesley Hanwen Deng provides a summary of recent work on AI fairness with Nur Yildirim, Monica Chang, Motahhare Eslami, Kenneth Holstein, and Michael Madaio.
- 📺 Cloud vs. On-Prem Showdown: The Future Battlefield for Generative AI Dominance in which Dave Vellante breaks down how spending momentum in AI is changing based upon the latest ETR data.
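Kearns and company go far deeper than this, but as a rough sketch of what "auditing" can mean in code, here is a toy Python check that compares selection rates across subgroups. The records, group labels, and 0.8 threshold are all invented for illustration, and this is a simple disparate-impact style comparison rather than the subgroup method from the paper.

```python
# Toy subgroup fairness audit: compare positive-prediction (selection) rates
# across groups and flag any group that falls well below the best-off group.
# Data, group labels, and the 0.8 threshold are invented for illustration.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, predicted_label) pairs with labels 0 or 1."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in records:
        totals[group] += 1
        positives[group] += label
    return {group: positives[group] / totals[group] for group in totals}

def audit(records, threshold=0.8):
    """Flag groups whose selection rate is below threshold * the best group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {group: rate for group, rate in rates.items() if rate < threshold * best}

# Hypothetical model outputs as (subgroup, predicted approval) pairs.
predictions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
print(selection_rates(predictions))  # A ≈ 0.67, B = 0.25
print(audit(predictions))            # B is flagged: 0.25 < 0.8 * 0.67
```

The gerrymandering point in the paper is that passing a check like this for a handful of top-level groups says little about the combinatorially many subgroups hiding inside them, which is why the authors treat auditing itself as a learning problem.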
The sun is free enough 🎶
I'm reminded that FAIR principles are less than a decade old. That's young in IT terms.
As I've said before… ethics and empathy are needed.
https://fudge.org/archive/esteem-is-stem-plus-ethics-plus-empathy/
But if they can, they'll find a way 🎶
Dystopian fever dreams aside, it can be useful to imagine a system of perverse market incentives that drives opaquely enriched data to become more closed, more proprietary, more paywalled, and more about short-term extraction than about balanced, long-term enlightenment both locally and globally. Indeed, we must continuously balance market demands with the pursuit of science, applied technology, and our evolving human values.
Seeking fairness in AI/ML pursuits and FAIR data frameworks could easily be part of how we govern ourselves in the future, or not. Without both, it is not clear how auditing would even be possible, and terms such as transparency could be reduced to little more than an early-2000s platitude that exited the zeitgeist almost as soon as it entered.
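On the FAIR data side, much of that auditing could start with something as simple as checking whether a dataset's metadata is even machine-checkable. As a toy sketch only, here is what a minimal FAIR-ish metadata audit might look like in Python; the field names, their grouping under each principle, and the example record are invented and are not an official FAIR validation profile.

```python
# Toy FAIR-ish metadata check: which fields an auditor might expect are missing?
# Field names, their grouping by principle, and the example record are invented
# for illustration; this is not an official FAIR validation profile.
REQUIRED_FIELDS = {
    "Findable": ["identifier", "title", "description"],
    "Accessible": ["access_url", "license"],
    "Interoperable": ["format", "schema"],
    "Reusable": ["provenance", "usage_terms"],
}

def fair_report(metadata: dict) -> dict:
    """Return, per principle, the required fields that are missing or empty."""
    return {
        principle: [field for field in fields if not metadata.get(field)]
        for principle, fields in REQUIRED_FIELDS.items()
    }

# Hypothetical metadata record for a dataset.
example = {
    "identifier": "doi:10.0000/example",
    "title": "Example sensor readings",
    "access_url": "https://example.org/data.parquet",
    "format": "parquet",
}
print(fair_report(example))
# {'Findable': ['description'], 'Accessible': ['license'],
#  'Interoperable': ['schema'], 'Reusable': ['provenance', 'usage_terms']}
```

If a report like that cannot be produced at all, transparency really is just a platitude.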
So, what will be the next big thing in AI/ML fairness and FAIR data?
Until then⦠Place your bets!
Disclosure
I am linking to my disclosure.