Basis Global today announced a strategic partnership with AnswerRocket, an enterprise AI solutions consultancy, to redesign how market research insights are created and delivered to clients. Most AI adoption in research has optimized for efficiency rather than better outcomes, producing faster surveys, quicker summaries, and dashboards that still leave teams debating what to trust and what to leave out. For years, the industry has treated the tradeoff between depth, speed, and practicality as inevitable. This partnership takes a different approach: rethinking how market research could be improved using AI, so that tradeoff no longer holds.
First Initiative: A New Researcher + AI Approach for Brand Tracking
The partnership’s first initiative introduces a new Researcher + AI approach to brand tracking at Basis Global. Brand tracking datasets have grown so large and complex that no research team could realistically explore every dimension of the data manually. AI makes it possible to systematically analyze the full dataset, testing hundreds of hypotheses across markets, audiences, and time periods to uncover patterns that would otherwise go undetected. Researchers remain at the center of the process, designing the research framework and translating those findings into clear strategic guidance. Basis calls this combination of AI-powered scale and human judgment Integrated Intelligence. The approach also lays the foundation for a connected data ecosystem combining survey, social, and search signals to create a more complete view of brand performance.
For clients, the difference shows up in the work itself. Guided by the researcher, the AI develops a comprehensive analysis plan and systematically evaluates the data, with each insight verified against the evidence for accuracy. What clients receive is a more complete understanding of their brand, backed by traceable data, and delivered as actionable guidance from senior researchers who know their business.
An Innovation Roadmap Shaped by Industry Needs
Brand tracking is the first application in the partnership’s innovation roadmap. Basis Global and AnswerRocket will convene client roundtables bringing together research and insights leaders to examine where AI is creating real value, where skepticism remains, and what the industry needs next. For participants, that means a curated peer group, early access to innovations before they go to market, and a direct voice in the partnership’s development priorities.
To learn more about the partnership, visit https://basisglobal.co/news-and-awards/basis-answerrocket-partnership.
Meta AI agent incident exposes deeper agentic security gap
Posted in Commentary with tags Facebook on March 21, 2026 by itnerd

A recent incident at Meta shows how an AI agent provided guidance that led an engineer to unintentionally expose a large amount of sensitive internal data to employees for a short period of time.
While Meta confirmed the issue was contained and no external data was mishandled, the episode highlights a broader risk as AI agents become embedded in engineering workflows. These systems aren't just generating suggestions; they're influencing real actions inside environments that handle sensitive data.
Gidi Cohen, CEO & Co-founder, Bonfy.AI
“Meta’s incident is exactly what happens when you let agents loose on sensitive data without any real data-centric guardrails. This wasn’t some exotic AGI failure, it was a very simple pattern: an engineer asked an internal agent for help, the agent produced a ‘reasonable’ plan, and that plan quietly exposed a huge amount of internal and user data to people who were never supposed to see it.
The problem is that neither the engineer nor the agent had any persistent notion of ‘who actually should see this data’ beyond whatever happened to sit in a narrow context window at that moment. Traditional controls don’t help much here. Endpoint DLP, CASB, browser controls, even basic role-based permissions: none of them are watching the actual content as it moves through an agent’s reasoning steps and tool calls, especially when the agent is running as a system service in some framework.
Our view is simple: treat agents like very fast, very forgetful junior interns and make the data security layer smart enough to compensate. That means three things: constrain what data is even available to the agent via contextual labeling and grounding; give the agent a Bonfy MCP tool it can call inline to ask “is this safe to use or send in this context?” before it takes an action; and inspect what ultimately comes out of the workflow before it lands in email, chat, dashboards, or internal portals. In a Meta-style scenario, those controls would have either prevented the broad internal exposure entirely or at least shrunk the blast radius to something manageable.
As organizations “experiment at scale” with agents, the only sustainable path is to make agents first-class entities in the risk model and put the intelligence where it belongs: on the data that’s being read, composed, and shared, not just on the configuration screens of yet another AI tool.”
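The three-part pattern the quote describes (constrain what data the agent sees, check inline before it acts, inspect what comes out) can be sketched as a pre-action content check wrapping the agent's "share" step. This is an illustrative toy under simple assumptions: the function names, the `Verdict` class, and the pattern-based policy are all hypothetical, not Bonfy's or any vendor's actual API.

```python
# Minimal sketch of an inline "is this safe to send in this context?" guardrail.
# All names here (check_content, guarded_share, Verdict) are hypothetical.

import re
from dataclasses import dataclass


@dataclass
class Verdict:
    allowed: bool
    reason: str


# Hypothetical policy: flag content carrying markers of sensitive data.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped strings, as an example
]


def check_content(content: str, audience: str) -> Verdict:
    """The inline check: is this content safe for this audience?"""
    if audience != "restricted":
        for pat in SENSITIVE_PATTERNS:
            if pat.search(content):
                return Verdict(False, f"matched {pat.pattern!r} for audience {audience!r}")
    return Verdict(True, "no sensitive markers found")


def guarded_share(content: str, audience: str) -> str:
    """Wrap the agent's outbound action with the pre-action check,
    so exposure is blocked before it lands in email, chat, or a portal."""
    verdict = check_content(content, audience)
    if not verdict.allowed:
        return f"BLOCKED: {verdict.reason}"
    return f"SHARED to {audience}: {content}"
```

The design point is that the check sits between the agent's decision and the action's side effect, so a "reasonable" plan that touches sensitive data fails closed instead of quietly broadening exposure.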
The bottom line is that when you expose any AI system to sensitive data, that data can leak. Samsung banned internal use of generative AI tools for exactly that reason. Keep that in mind if your organization uses AI.