Perspectives from AI-enabled Product and Agile Leader Bob Fischer
Over the past couple of years, I’ve started using AI as a thinking partner. It’s valuable in several aspects of my work and personal life, but one place it’s been especially useful is investing.
Not to tell me what to buy, but to sharpen my reasoning—to pressure-test assumptions, force clearer decisions, and ask questions I should be asking myself: Why do I own this? What would make me sell? How much risk am I actually taking?
The AI’s coaching was helpful, but the setup required for each coaching “session” was frustrating.
That’s because each session meant exporting data, cleaning spreadsheets, uploading files, and re-explaining what had changed. Twenty minutes of prep just to start a real conversation.
Eventually, it became clear that the problem wasn’t prompting or memory. The problem was that my data lived in one system, while the AI lived somewhere else, meaning every conversation started from scratch. Solving that problem led me to two much more valuable realizations:
Despite spending much of my career in the financial services sector, I’ve never really been an investor in the truest sense. I invested, sure, but like most people, I dumped money into diversified funds and occasionally bought individual stocks with no real framework or rubric in place. Case in point: I sold Nvidia in 2023.
AI, I reasoned, could help me make better decisions. So, I exported, cleaned, and explained. Then I did it again. And again. Soon, it became clear that this manual process wasn’t the path to a workable AI investing coach. If the idea was going to work, I’d have to find a way to avoid paying this “context tax” every time I wanted a conversation with my new AI investment coach.
This struggle will be familiar to almost any team using AI today: copying information out of Jira, dashboards, or spreadsheets, and pasting it into chat to get insights. This “context tax” adds friction before the real work even begins, and it’s one of the chasms any AI solution has to cross before it becomes truly valuable to a team or organization.
Rather than keep feeding my data into AI and paying that tax, I flipped the model and brought AI into the application. I built a small app to serve as the system of record for everything: live portfolio data, rules, research notes, and computed summaries. Then I embedded an AI assistant directly into the interface.
The difference is how the context is assembled. Each time I select something, whether a single position or a summary view, the app dynamically rebuilds the AI prompt from live data.
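A minimal sketch of that assembly step might look like the following. The `store` object and all field names here are illustrative assumptions, not the app’s actual schema:

```python
import json

def build_prompt(selection, store):
    """Rebuild the AI prompt from live data for the selected view.

    `store` is a hypothetical data-access layer; the field names
    below are illustrative, not the app's real schema.
    """
    context = {
        "position": store.get_position(selection),
        "rules": store.get_rules(selection),
        "notes": store.get_research_notes(selection),
        "summary": store.get_computed_summary(selection),
    }
    return (
        "You are an investing coach. Here is the live context:\n"
        + json.dumps(context, indent=2, default=str)
        + "\nGiven all of this, what should I reconsider?"
    )
```

Because the prompt is rebuilt on every selection rather than cached, the AI always sees the current state of the portfolio.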
With this context in place, the AI isn’t starting from a blank chat; it has situational awareness. Instead of asking, “Here’s my spreadsheet, what do you think?” I can ask, “Given all of this, what should I reconsider?”
And it answers—immediately—with specifics.
These chat responses have value, but they also have some frustrating limitations. You read them, interpret them, and then manually update your system. So I stopped treating the AI like a conversational partner and started treating it like part of the workflow.
When the AI suggests something, like tightening a rule, adjusting a threshold, or updating a thesis, it returns structured JSON output instead of just prose. The app renders this output as actionable “Apply” cards. Click once, and the change is written directly into the app’s system of record.
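As a sketch of that flow, assuming a made-up suggestion schema and function names (not the app’s actual format), one accepted suggestion might be applied like this:

```python
import json

# Hypothetical schema: the AI returns a structured change proposal, not prose.
SUGGESTION = json.loads("""
{
  "action": "update_rule",
  "target": "stop_loss_pct",
  "old_value": 8,
  "new_value": 6,
  "rationale": "Volatility has dropped; tighten the stop."
}
""")

def apply_suggestion(suggestion, record):
    """Write an accepted suggestion into the system of record.

    `record` is a plain dict standing in for the app's store; a real
    implementation would validate the payload before persisting it.
    """
    if suggestion["action"] == "update_rule":
        record[suggestion["target"]] = suggestion["new_value"]
    return record

record = {"stop_loss_pct": 8}
apply_suggestion(record=record, suggestion=SUGGESTION)  # one "Apply" click
```

Because the suggestion is machine-readable, the "Apply" card is just a rendering of this payload, and accepting it is a single write.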
No copying. No pasting. No translation step.
Insights turn directly into action. That’s when it stopped feeling like a chatbot and started feeling like a real coach.
This wasn’t a toy or a simple prototype. The app is a mid-sized Python codebase (around 18,000 lines, nearly half of which are test cases) with a clean, layered architecture.
Conversation history is stored locally, and prompts are rebuilt dynamically from live data rather than cached. This separation matters. The UI doesn’t know about portfolio logic, and the AI layer doesn’t mutate state directly. Everything flows through structured updates, which keeps behavior predictable and testable.
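One way to sketch that separation, with illustrative names of my own rather than the app’s real classes, is to make a structured update the only path to mutation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Update:
    """A structured update: the only way any layer changes state."""
    field: str
    value: object

class PortfolioStore:
    """System of record. Mutation happens only via apply()."""
    def __init__(self):
        self._state = {"thesis": "", "stop_loss_pct": 8}

    def apply(self, update: Update):
        if update.field not in self._state:
            raise KeyError(f"unknown field: {update.field}")
        self._state[update.field] = update.value

    def snapshot(self):
        # Read-only copy handed to the UI and AI layers.
        return dict(self._state)

class AILayer:
    """Proposes updates; never touches the store directly."""
    def suggest(self, snapshot) -> Update:
        # In the real app this would call the model; stubbed here.
        return Update("stop_loss_pct", 6)
```

Because every change is an `Update` value, each layer can be tested in isolation, and no layer can mutate state behind another’s back.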
Modern AI coding assistants handled a surprising amount of the build (scaffolding, refactors, tests, even documentation) while I focused on product decisions and structure. After all, I’m a product and Agile leader, not an engineer, and that’s part of the point. These tools make building embedded intelligence far more accessible than most teams assume.
If you’re exporting data into AI chats to get insight, you’re adding friction. If you have to re-establish context in every session, you’re adding frustration. But if AI is embedded directly into the system, with live data, rules, and history already available, it becomes dramatically more useful.
My takeaway after building this is simple: AI is most effective when it already understands your context. Stop re-explaining your data, and put AI to work where the work actually happens.
Bob Fischer
Senior Director, Agile Delivery & Training
https://www.linkedin.com/in/bobafischer/