Here’s a Q&A with Aditya Ganjam, Co-founder of Conviva, on human-AI collaboration. This is something I don’t usually do, but I thought I’d give it a shot and see if it gets a good response. Please leave a comment and let me know what you think.
In what ways might human-AI collaboration move past simple automation to actively shape and guide strategic business choices?
AI agents are shifting from task runners to partners in decision-making. To achieve this potential, organizations must measure real experience and outcomes, not just accurate responses. By continuously analyzing every interaction and linking consumer behavioral patterns—for example, movement from agents to apps and websites, long pauses, and abandonment—to results like purchases, bookings, or resolution, teams can objectively measure agent effectiveness from the human’s perspective. This approach exposes friction, inefficiency, and confusion, showing where the agent helps and where it hurts. It can also create a virtuous improvement cycle in which outcomes continuously sharpen prompts and tools. That outcome loop turns agents into engines of strategic insight that drive growth, reliability, and trust.
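To make the idea concrete, here is a minimal sketch of outcome-linked measurement. The event names, outcome labels, and scoring are hypothetical assumptions, not Conviva's actual schema; the point is simply that friction signals and business outcomes can be tallied per session.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One consumer interaction with an AI agent (hypothetical schema)."""
    events: list   # e.g. ["agent_reply", "long_pause", "switch_to_app"]
    outcome: str   # e.g. "purchase", "resolution", "abandoned"

# Hypothetical signal and outcome labels, per the patterns named above.
FRICTION_SIGNALS = {"long_pause", "switch_to_app", "switch_to_website"}
SUCCESS_OUTCOMES = {"purchase", "booking", "resolution"}

def effectiveness(sessions):
    """Return (success rate, average friction signals per session):
    a crude outcome-linked effectiveness score."""
    if not sessions:
        return 0.0, 0.0
    successes = sum(s.outcome in SUCCESS_OUTCOMES for s in sessions)
    friction = sum(sum(e in FRICTION_SIGNALS for e in s.events)
                   for s in sessions)
    return successes / len(sessions), friction / len(sessions)

sessions = [
    Session(["agent_reply", "long_pause"], "abandoned"),
    Session(["agent_reply"], "purchase"),
    Session(["agent_reply", "switch_to_app"], "resolution"),
]
rate, friction = effectiveness(sessions)
print(rate, friction)  # 2 of 3 sessions succeed; 2 friction events over 3 sessions
```

A real pipeline would segment these scores by agent version or prompt variant so that the "virtuous improvement cycle" has something to act on.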
Which often overlooked human abilities will become increasingly valuable as AI integrates into the workplace?
Curiosity with rigor is the superpower. People who design experiments, test assumptions, and learn fast will create outsized value. As agents take over routine execution, humans must become designers of discovery. We will see significant value placed on those willing and confident enough to test boundaries, question assumptions, and learn from failure at speed. The winners will make experimentation a habit, treating every failure as data that sharpens both the product and the agent.
What steps can leaders take to maintain clear accountability and openness as AI is adopted into daily business processes?
Manage AI by outcomes, not outputs. Define success as the consistent, efficient achievement of business results—add-to-cart, purchase, booking, resolution—and track it with client-side telemetry that reflects experience and engagement from the consumer’s perspective. Pair this with explainability (what the agent did and why) and continuous feedback loops that refine prompts, tools, and policies. Keep human oversight for ethics and alignment, but let data drive iterative improvement.
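The feedback loop described above can be sketched as a tiny outcome tracker. The class name, variant labels, and success/failure inputs are illustrative assumptions, not a real product API; the sketch only shows outcomes per variant driving the next iteration.

```python
class OutcomeTracker:
    """Hypothetical aggregator: records business outcomes per prompt
    variant and surfaces the variant with the best success rate,
    i.e. a 'let data drive iterative improvement' loop."""

    def __init__(self, variants):
        self.stats = {v: {"trials": 0, "successes": 0} for v in variants}

    def record(self, variant, success):
        # Each client-side telemetry event lands here as a trial
        # with a boolean business outcome (e.g. booking completed).
        self.stats[variant]["trials"] += 1
        self.stats[variant]["successes"] += int(success)

    def best_variant(self):
        def rate(v):
            s = self.stats[v]
            return s["successes"] / s["trials"] if s["trials"] else 0.0
        return max(self.stats, key=rate)

tracker = OutcomeTracker(["prompt_a", "prompt_b"])
for success in [True, False, True]:
    tracker.record("prompt_a", success)   # 2/3 success rate
for success in [True, True, True]:
    tracker.record("prompt_b", success)   # 3/3 success rate
print(tracker.best_variant())  # prompt_b
```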
Which ethical standards and governance structures should organizations establish today to effectively manage AI agents by 2027?
Enterprises should formalize human-in-the-loop governance as their first safeguard with outcome-based metrics as the central focus of agent performance. Require real-time monitoring of agent behavior “in the wild,” tying actions to consumer experience and measurable results, not just model-level accuracy. Mandate traceability for critical decisions, bias checks, and rollback paths, and institutionalize continuous learning so fixes and improvements flow back into prompts, tools, and safeguards. This makes systems provable, auditable, and resilient.
What’s the most important mindset or cultural transformation companies need to make to harness the full potential of human-driven AI?
Move from fear to evidence-driven curiosity. Encourage teams to co-work with agents, instrument experiences end-to-end, and act on what the data shows about outcomes. When people see how agents improve resolution, speed, and satisfaction, and where they don’t, they focus on higher-value work while systematically tuning the rest. That’s how organizations convert AI from novelty into predictable business performance.
GenAI boosts productivity by nearly 4 hours a week but gains are highly uneven: Nexthink
Posted in Commentary with tags Nexthink on February 9, 2026 by itnerd

New research from Nexthink, the global leader in Digital Employee Experience (DEX) management, reveals that users[1] of Generative AI (GenAI) tools save a net average of 3 hours and 47 minutes per week.[2] However, the analysis finds large discrepancies between the four market-leading tools, with ChatGPT delivering more than double the productivity gain of Copilot.
The analysis, based on 4.9m sessions per day across 3.4m employees, also finds that users engage with GenAI 10 times per day on average, for a total of 3 hours and 14 minutes per week.[4] However, a significant number of users have yet to engage with any of the Big Four tools.
While businesses have been quick to embrace GenAI, a lack of visibility around which tools are being used, by whom, and for what purposes, has been a significant problem in understanding the value they are getting from these investments. Nexthink AI Drive solves this problem by consolidating visibility, usage, guidance, and measurement data into a single vantage point. Combining this robust DEX data with user sentiment analysis, it uncovers employee pain points and adoption barriers, enabling organizations to provide better adoption support and employees to gain confidence faster.
To find out more about GenAI adoption or to discover how such tools are being used in your organization, please visit Nexthink’s AI Activation Playbook.
[1] Employees who log in at least once a week to any GenAI tool
[2] Based on self-reported estimates of time saved when using GenAI tools, from 5,000 end users between 30th October 2025 and 29th January 2026.
[3] Overall averages reflect usage across all GenAI tools observed and are weighted by real-world usage levels (the four tools shown are the most used, but not used equally).
[4] Data collected between 30th October 2025 and 29th January 2026 from the Nexthink products AI Drive and AppEx. Data has been collected as a benchmark of tools across organizations; as such, use of in-house tools has not been included in the analysis. All other product names, logos, brands, and other trademarks included in this release are the property of their respective trademark holders, and use of them does not imply any affiliation with or endorsement by them.