How We Trained Account Leads to Run AI Workflows — Without Making Them Engineers
The Real AI Adoption Gap
Most agencies that invest in AI tools underinvest in the human layer that operates them. The result is predictable: the tools get used inconsistently, team members develop their own ad hoc habits, and the efficiency gains that justified the investment never fully materialise. The technology works. The adoption does not.
The gap is almost never technical capability. Account leads can learn a new interface. They can follow instructions. The gap is operational clarity — who does what, when, with which AI tool, and in what format, consistently enough that the output is reliable and the process can be improved over time. Without that clarity, AI adoption stays at the individual level. It never becomes an organisational capability.
Solving this required us to think less like software implementers and more like workflow designers.
The G3 Strategic Framework: Assess, Plan, Execute
Every AI-enabled workflow at Goose Digital runs through the G3 Strategic Framework. Three phases, in order, for every engagement and every internal project: Assess, Plan, Execute.
Assess means understanding the current state before any AI is involved. What does the client’s content landscape look like? What are the gaps? What does success look like, specifically? This phase is entirely human — it is the judgment work that defines what the AI will be asked to do.
Plan means translating that assessment into structured inputs. Not a vague brief, but a parameterised set of instructions: the audience, the voice parameters, the content type, the subject matter, the constraints. AI executes against structure. The Plan phase is where that structure gets built.
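To make "parameterised set of instructions" concrete, a Plan-phase brief can be captured as a small structured record rather than free text. This is a minimal sketch; the field names are illustrative assumptions, not Goose Digital's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class PlanSpec:
    """Structured Plan-phase inputs an AI workflow executes against.
    Field names are illustrative, not an actual Goose Digital schema."""
    client: str
    content_type: str            # e.g. "service page", "newsletter introduction"
    audience: str
    subject: str
    voice_parameters: dict = field(default_factory=dict)   # tone, formality, vocabulary
    constraints: list = field(default_factory=list)        # restrictions, word counts

    def validate(self) -> "PlanSpec":
        # A plan is only executable once every required input is present.
        missing = [name for name in ("client", "content_type", "audience", "subject")
                   if not getattr(self, name)]
        if missing:
            raise ValueError(f"Plan incomplete, missing: {missing}")
        return self
```

The point of the structure is the `validate` step: an underspecified brief fails loudly before any generation happens, instead of producing plausible but off-target output.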
Execute means running the workflow, applying the appropriate plugin or skill, reviewing outputs, and approving at the defined checkpoints. The AI handles generation and formatting. The account lead makes the judgment calls about quality, emphasis, and timing.
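The checkpoint structure in Execute can be sketched as a simple gate: AI output only moves forward once a human signs off at each defined point. The stage names and the `approve` callback are assumptions for illustration, not Goose Digital's actual tooling:

```python
# Illustrative checkpoint stages; real workflows would define their own.
CHECKPOINTS = ["draft_review", "brand_check", "final_approval"]

def run_with_checkpoints(draft: str, approve) -> str:
    """Pass an AI-generated draft through each human checkpoint in order.
    `approve(stage, draft)` represents the account lead's judgment call;
    a False stops the workflow rather than letting unreviewed output ship."""
    for stage in CHECKPOINTS:
        if not approve(stage, draft):
            raise RuntimeError(f"Stopped at checkpoint: {stage}")
    return draft
```

The design choice worth noting is that rejection halts the pipeline by default: the safe failure mode is "nothing ships", not "unreviewed output ships".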
This sequence is what makes AI work repeatable. Any account lead who understands G3 can pick up a workflow, understand where they are in it, and know what their role is. The framework carries the institutional knowledge so the individual does not have to.
Skills Libraries and Plugin Architecture
The skills library is a catalogue of repeatable AI tasks that account leads can invoke without building the context from scratch every time. Each skill is scoped, documented, and tested before it goes into the library. A skill for writing a service page is different from a skill for writing a newsletter introduction, which is different from a skill for drafting a Google Business Profile update. Each has its own inputs, its own parameters, and its own expected output format.
The plugin layer sits underneath the skills library. Plugins carry client-specific context — brand voice, audience definition, approved proof points, content restrictions — and are named following the gd-[function] convention. When an account lead invokes a skill for a specific client, the relevant plugin is already there. The system knows who the client is, what their voice sounds like, and what the content parameters are.
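A minimal sketch of how a skill invocation can pick up client context from the plugin layer. The `gd-[function]` naming follows the convention described above; the registry structure, lookup logic, and example contents are assumptions for illustration:

```python
# Hypothetical registries; names and contents are illustrative only.
PLUGINS = {
    "gd-brand-voice": {"acme": {"tone": "plainspoken", "banned_phrases": ["synergy"]}},
    "gd-audience":    {"acme": {"primary": "operations leads at mid-market firms"}},
}

SKILLS = {
    "service-page": {
        "plugins": ["gd-brand-voice", "gd-audience"],  # client context this skill needs
        "output_format": "markdown",
    },
}

def invoke_skill(skill_name: str, client: str) -> dict:
    """Assemble the execution context for a skill: the skill's own
    parameters plus the client-specific context from each required plugin."""
    skill = SKILLS[skill_name]
    context = {"skill": skill_name, "output_format": skill["output_format"]}
    for plugin_name in skill["plugins"]:
        # The plugin already carries the client's voice, audience, and restrictions,
        # so the account lead never rebuilds this context by hand.
        context[plugin_name] = PLUGINS[plugin_name][client]
    return context
```

Calling `invoke_skill("service-page", "acme")` returns a context with the brand voice and audience definition already attached, which is the mechanism behind the claim that "the relevant plugin is already there".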
The practical effect is that onboarding a new account lead to a client account becomes much faster. The workflow carries the institutional knowledge. The new person does not have to absorb years of accumulated understanding about a client before they can produce on-brand output. They learn the framework, learn to invoke the right skills, and the system handles the rest.
Claude Cowork as the Execution Layer
Claude Cowork is where account leads actually run workflows. It is not a chatbot interface — it is a structured workspace where plugins, skills, and client context come together and AI executes against them.
The key design principle is that Cowork preserves human expertise while leveraging AI speed. Account leads make the judgment calls: is this the right angle for this client right now? Does this piece serve the content plan goal? Is the tone calibrated correctly for this specific audience? Those are human decisions. The generation, the formatting, the structural consistency, the application of brand parameters — that is what the AI handles.
This distinction is important because it is the part of AI adoption that most often gets collapsed. When AI is positioned as the decision-maker, human expertise atrophies and accountability disappears. When AI is positioned as the execution layer for human decisions, the human expertise gets amplified. Cowork is built around the latter model.
What Changed After We Rolled This Out
The most visible change was where account lead time went. Less time on repetitive drafting — brief interpretation, structural setup, format scaffolding, first-pass copy — and more time on the strategy and client relationship work that actually differentiates a strong account lead from an average one.
Consistency improved across the team in a way that is hard to achieve through training alone. When the system carries the brand parameters and the workflow structure, the output from one account lead looks like the output from another. That consistency is what makes it possible to scale a content program across a client base without quality degrading as volume increases.
The GD Maintenance rollout is a useful parallel. Before it existed, running a billing audit or a retainer report required someone technical. After it existed — a browser, the right URL, and thirty seconds — any account manager could run the same report. The technical dependency disappeared. The same dynamic plays out in Cowork: the framework removes the dependency on individual expertise and makes the capability organisational.
The Honest Lessons
Rollout is not instant. Teams need time to trust a new system before they will use it without prompting. The early adoption period involves more hand-holding than the framework itself would suggest, and that is normal.
Documentation matters more than most agencies expect. The G3 framework only works if it is written down, kept current, and treated as a living operational reference — not a PDF that gets read at onboarding and never touched again. The same applies to skill scoping and plugin documentation. If it is not maintained, it atrophies.
The best AI workflows are designed with the account lead in mind, not the engineer who built them. The friction points that matter are not technical — they are workflow friction: moments where the system requires more steps than feel necessary, or where the output format does not match what the account lead actually needs to do next. Removing that friction is ongoing work, not a one-time configuration.
Looking Ahead
The framework gets stronger with each new client engagement and each new skill added to the library. The skills that get used most inform where to invest in refinement. The patterns that work in one account type often translate to others with minimal adjustment. The institutional knowledge that previously lived in individual expertise gradually becomes a property of the system.
That is the long-term point of all of this: building an operation where the AI capabilities compound over time because they are embedded in workflow infrastructure, not in individual habits that walk out the door when someone leaves.