As organizations race to deploy AI copilots, many are discovering that the real challenge isn’t the technology itself, but how it reshapes human judgment, accountability, and trust inside the enterprise. In this Insights@Questrom piece, Gerry Tsoukalas, Associate Professor of Information Systems, explores why some organizations extract real value from AI copilots while others struggle, arguing that success depends less on the model and more on how work, decision rights, and governance are redesigned around human-AI collaboration.
What distinguishes organizations that successfully integrate AI copilots into daily work from those that struggle to gain real value from them?
From what I’ve seen so far, the difference often isn’t the model itself. It’s whether the organization treats copilots as a simple add-on, or as a reason to rethink the workflow.
The teams that seem to get value faster tend to be very explicit about decision rights: When is the AI only advising? When can it execute within guardrails? When does a human have to step in? When those boundaries aren’t clear, the copilot can end up producing suggestions that don’t fit the real constraints of the job, and people gradually tune it out.
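To make that concrete, here is a minimal sketch of what explicit decision rights might look like if encoded as a small policy layer. Everything in it, the action names, the dollar threshold, the routing rules, is a hypothetical illustration rather than any real product’s API.

```python
from enum import Enum, auto

class DecisionRight(Enum):
    ADVISE_ONLY = auto()        # AI may suggest; a human decides and acts
    EXECUTE_IN_BOUNDS = auto()  # AI may act autonomously within guardrails
    HUMAN_REQUIRED = auto()     # a human must review before anything happens

def decision_right(action: str, amount: float) -> DecisionRight:
    """Toy routing rule: which decision right applies to a proposed action.

    The action names and the $10k threshold are illustrative assumptions,
    not drawn from any specific deployment.
    """
    if action == "draft_client_email":
        return DecisionRight.ADVISE_ONLY        # human edits and sends
    if action == "schedule_follow_up" and amount == 0:
        return DecisionRight.EXECUTE_IN_BOUNDS  # low-risk, reversible
    if amount > 10_000:
        return DecisionRight.HUMAN_REQUIRED     # high-impact: escalate
    return DecisionRight.ADVISE_ONLY            # default to advising
```

The point isn’t the specific rules; it’s that the boundaries are written down somewhere people and systems can actually consult.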
One practical illustration is how some financial advisors use AI not as a replacement for judgment, but as workflow support. For example, Morgan Stanley’s AI-powered Debrief tool generates meeting notes, surfaces action items, and drafts follow-up emails after client meetings, while advisors remain accountable for editing and sending them. This compresses routine work while keeping the human in control of client communication.1
How do AI copilots change the nature of decision-making inside organizations, particularly around judgment, accountability, and trust between humans and algorithms?
One thing copilots seem to do is introduce a new kind of ambiguity around accountability. It’s not that responsibility disappears, but it can become harder to see what influenced the decision when humans and AI are collaborating.
In practice, judgment becomes more blended: the AI contributes speed and pattern recognition, while humans bring context, constraints, and tradeoffs the system may not reliably represent. The organizations that appear to handle this best often build lightweight traceability, so for higher-impact decisions you can reconstruct: what the AI recommended, what the human changed, and why.
Trust also becomes more situational. People don’t trust copilots in the abstract. They tend to trust them in the specific contexts where they’ve seen them perform well, and distrust them in the contexts where they’ve seen them fail.
What are the most common organizational frictions or unintended consequences you see emerge when AI tools are introduced at scale?
A common friction I’ve noticed is that some mid-level experts feel their expertise is being compressed or devalued. That perception isn’t always wrong, and it matters because those experts are also the people you most need involved to validate and improve AI outputs. You can see this show up as a broader mismatch between leadership enthusiasm and employee sentiment. For instance, an HBR survey reported that while 76% of executives believed employees were enthusiastic about AI adoption, only 31% of individual contributors said they were.2
Another issue is talent development. A lot of organizations historically trained juniors through repetition: doing routine work, observing seniors, and gradually building judgment. If copilots take over too much of that routine layer, you can end up with employees who get good at editing AI output without building deeper reasoning skills as quickly. This is part of why some reporting argues companies are investing heavily in AI while underinvesting in the training and change management employees need to use it effectively.3
And then there’s a slower risk that’s harder to see in the moment: capability drift. Over time, teams may lose certain skills without realizing it, until they hit a novel situation where the AI struggles and the humans can’t easily step back in. That concern is starting to enter the mainstream, with Business Insider reporting that many leaders explicitly worry about “skill atrophy” from overreliance on workplace AI tools.4
In your view, how should leaders think about redefining roles and responsibilities when employees are working alongside AI systems rather than simply using them as tools?
One emerging pattern is that leaders get more traction when they stop treating AI as a layer on top of roles and instead redefine roles around human decision-making and judgment. McKinsey’s Agents, Robots, and Us report describes how work is evolving into partnerships between people and AI — with humans focusing on framing problems, guiding AI, interpreting results, and making judgment calls.5
Practically, that means moving from job descriptions like “analyze, draft, summarize” to decision charters:
- What decisions are you accountable for?
- Which constraints must you enforce?
- When do you escalate?
- If you override the AI, what reasoning needs to be documented?
A high-profile example of this mindset is Moderna’s structural reorganization, where the company merged its HR and technology functions under a single Chief People and Digital Technology Officer.6 The goal — as reported by The Wall Street Journal — is to align talent strategy and technology strategy in an AI-augmented enterprise, rethinking who does what work rather than merely automating existing tasks.
And performance evaluation needs to follow. If copilots are doing much of the execution work, it often makes more sense to assess people on judgment quality, exception handling, and decision outcomes over time, not just throughput.
What governance practices or guardrails are essential to ensure AI copilots enhance performance without undermining transparency, fairness, or employee confidence?
The governance approaches that seem to work best so far are the ones that are built for learning, not just compliance.
For higher-impact decisions, organizations often benefit from some form of decision provenance: what the AI recommended, what the human changed, and what happened next. Without that, it’s difficult to audit outcomes or improve the system.
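As a rough sketch, a provenance record can be quite lightweight; the schema and field names below are assumptions for illustration, not an established standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable human-AI decision (illustrative schema, not a standard)."""
    decision_id: str
    ai_recommendation: str     # what the copilot proposed
    final_decision: str        # what was actually done
    human_override: bool       # did the human change the recommendation?
    override_reason: str = ""  # expected when human_override is True
    outcome: str = "pending"   # filled in later: what happened next
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Hypothetical usage: a human tightens the AI's recommendation and says why.
record = DecisionRecord(
    decision_id="loan-2024-0042",
    ai_recommendation="approve at 6.1% APR",
    human_override=True,
    final_decision="approve at 6.8% APR",
    override_reason="recent income volatility not in the model's features",
)
```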
It also helps to have practical contestability mechanisms: a real review path when someone wants to challenge an AI-informed decision.
And on fairness, one emerging lesson is that you can’t only audit the model. You also have to audit the collaboration. Bias can show up in how humans accept, reject, or modify AI recommendations across different groups, even if the model itself looks “neutral” on paper.
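A minimal sketch of what auditing the collaboration could look like, using synthetic data and a made-up grouping; in practice the groups and the log would come from the organization’s own decision records.

```python
from collections import defaultdict

# Illustrative log of human responses to AI recommendations, keyed by the
# group the decision concerned (synthetic data, not real outcomes).
decisions = [
    {"group": "A", "accepted": True},
    {"group": "A", "accepted": True},
    {"group": "A", "accepted": False},
    {"group": "B", "accepted": True},
    {"group": "B", "accepted": False},
    {"group": "B", "accepted": False},
]

totals = defaultdict(lambda: {"n": 0, "accepted": 0})
for d in decisions:
    totals[d["group"]]["n"] += 1
    totals[d["group"]]["accepted"] += d["accepted"]

for group, t in sorted(totals.items()):
    rate = t["accepted"] / t["n"]
    print(f"group {group}: AI recommendations accepted {rate:.0%} of the time")
# A large gap between groups is a signal to investigate the collaboration,
# even if the model's own outputs pass a standalone fairness audit.
```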
How can organizations use data and analytics to continuously monitor whether human-AI collaboration is improving outcomes, or quietly introducing new risks?
One challenge is that headline metrics like “speed” or “throughput” can look great while hidden risks build underneath. So organizations often need monitoring that separates outcomes from decision quality.
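A toy example of what that separation might look like in a monitoring dashboard; the metric names, numbers, and the 10% threshold are illustrative assumptions.

```python
# Toy monitoring snapshot: throughput can rise while a decision-quality
# proxy (here, rework rate) quietly deteriorates underneath it.
weekly = [
    {"week": 1, "tasks_done": 120, "rework_rate": 0.05},
    {"week": 2, "tasks_done": 150, "rework_rate": 0.07},
    {"week": 3, "tasks_done": 180, "rework_rate": 0.12},
]

for w in weekly:
    flag = " <- investigate" if w["rework_rate"] > 0.10 else ""
    print(f"week {w['week']}: {w['tasks_done']} tasks, "
          f"{w['rework_rate']:.0%} reworked{flag}")
```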
In software engineering, the productivity impact of AI tools has been measured empirically. In one widely cited controlled experiment, developers using GitHub Copilot completed a defined coding task roughly 56% faster than those without AI assistance, suggesting measurable gains in productivity for routine programming work. That kind of measurable change is exactly the pattern analytics teams can look for: not just whether output increased, but how and in which contexts.7
What skills or mindsets do employees need to develop to work effectively with AI copilots, and how should organizations support that transition?
If I had to pick one skill that seems increasingly important, it’s judgment calibration: knowing when to trust the AI, when to challenge it, and when to override it.
A second skill is learning to make tacit judgment explicit. People often override with “this feels wrong,” and that might be correct. But it’s much more useful when employees can articulate: This violates a constraint, this is a novel case, this tradeoff matters here.
Looking ahead, how do you see human-AI collaboration reshaping organizational design and leadership expectations over the next few years?
It’s still early, but one tension is already emerging: copilots can enable both centralization and decentralization, and organizations will drift toward whichever is easier to govern.
On the centralization side, copilots make it easier to standardize work and enforce consistency, which is valuable in regulated or high-risk settings. The downside is brittleness: decision-making can concentrate in the small group that owns the tools and guardrails.
On the decentralization side, copilots lower the cost of judgment at the edges. A recent example from customer service illustrates both forces: some companies deployed AI assistants to handle large volumes of routine queries, significantly reducing response times and scaling support. But experience from others (e.g., where initial AI-only strategies were scaled back in favor of humans plus AI working together) shows the need for clear escalation paths and skilled humans on complex or sensitive issues.8
That shifts leadership from “supervising work” to designing decision systems: what AI can do, when humans must intervene, what triggers escalation, and how overrides become opportunities for learning. The winners likely won’t be the firms that deploy AI first, but the ones that improve the human-AI system in a sustainable way, while preserving the human skills they’ll still need when the model breaks or the world changes.
References
1. https://www.morganstanley.com/press-releases/ai-at-morgan-stanley-debrief-launch
2. https://hbr.org/2025/11/leaders-assume-employees-are-excited-about-ai-theyre-wrong
3. https://hbr.org/2025/11/leaders-assume-employees-are-excited-about-ai-theyre-wrong
4. https://www.businessinsider.com/leaders-worry-about-skill-atrophy-due-to-ai-adoption-2025-10
5. https://www.mckinsey.com/mgi/our-research/agents-robots-and-us-skill-partnerships-in-the-age-of-ai
6. https://www.wsj.com/articles/why-moderna-merged-its-tech-and-hr-departments-95318c2a
7. https://arxiv.org/abs/2302.06590
8. https://www.eesel.ai/blog/companies-that-use-ai-chatbots-for-customer-service