how AI exposes poorly defined human systems
originally posted to LinkedIn
the biggest risk in enterprise AI? exposing how poorly defined our human systems actually are.
when i talk to real AI practitioners about what holds back their initiatives, they don’t talk about what you might expect.
not the conceptual risk of targeting novel use cases. not the execution risk of scaling a new technology.
instead, they talk about how hard it is to translate informal, socially enforced human decision-making into formally specified and documented AI systems.
think about trying to align your organization around how to prioritize and resource a portfolio of projects. are the objectively most promising projects prioritized, or is it the highest-paid person in the room's pet project?
questions like that are hard to answer because many human systems rely on informality, ambiguity, and implicit authority. they work because judgment flows through trust, relationships, and social context.
AI doesn’t operate that way.
to make AI work in these environments, decision-making has to be made explicit. rules, authority, and accountability need to be formalized.
sometimes that’s possible. sometimes it isn’t.
so the next time an AI initiative fails, don’t just ask whether the technology was lacking.
ask whether the human system it was built on was ever clearly defined in the first place.
interested in working together? let's talk