Stephan Thoma built the original L&D teams at Google, works at the forefront of AI, and has spent the last twenty-plus years understanding how organisations evolve skills. As a sought-after adviser and coach to progressive companies and people, his nuanced understanding extends beyond productivity to the intersection of human ingenuity and technology.
Michelle Cheng is Talent Director at Notion Capital. Michelle partners with Notion’s founders to develop their leadership teams and hiring plans over the course of their Start, Build, Scale journey. She specialises in People Insights, Compensation, Org Design and Talent Strategy.
—
In his viral lecture, The Death of Innovation, the End of Growth, economist Robert Gordon argues that there has been a dearth of game-changing innovations since the Internet, and that humankind has yet to devise a rival to the combustion engine for its impact on productivity. He is sceptical of the rise of AI, seeing it as little more than a magic trick that incrementally increases, rather than transforms, our potential.
But a conversation about productivity isn’t complete without considering a key factor: human behaviour. What Mr Gordon overlooks is that to understand shifts in productivity, we need to understand how people learn, grow and, critically, adopt technology to help them with their jobs. Moving past the narrative of robots stealing our jobs, we predict four potential areas of impact on a continuum - some near term and some more far (fetched?) out:
Generative AI’s impact on employee performance thus far seems to have focused on it as a tool for minor productivity improvements:
With both of the above, roles carry on as before, but with potentially bigger OKRs, better work-life balance, or both. It also raises tensions around performance, evaluation and compensation. Studies have shown that adoption of Gen AI tools disproportionately improves the output of mediocre and poor performers, rather than high performers.
How should companies handle this? Is this OK? How should you differentiate (human) performance, given mixed adoption of the tools?
Roles will soon need to be recast in light of the impact of Gen AI, and organisations redesigned. Founders need to recognise their role as arbiters of change, setting company-wide incentives rather than reactively responding to individual variances in performance.
Every job listing has a list of skills and qualifications required for the role. ‘Skills’ are the most popular taxonomy for defining how companies hire talent, develop people and plan their future strategies. But increasingly, in an AI-augmented human world, skills may be replaced by, or at least combined with, a different lens: ‘tasks’.
Sangeet Paul Choudary has written about how AI will not replace jobs wholesale (yet!) but will start to replace some of the tasks that people do within their roles. In other words, we’ve started the great ‘unbundling’ of jobs. Thinking about jobs as a set of tasks, rather than skills, is a very helpful way of understanding the scale and pace of the impact that AI will have on them.
As AI takes over some of the tasks of a role, the ‘human’ role evolves into a new set of tasks. This will move fast. Witness the Gemini 1.5 Pro and GPT-4o announcements a couple of weeks ago: more and more tasks will be able to be done by the machine. This is no longer role augmentation; it is task substitution, with human supervision. The outputs of the AI tasks will require human oversight and sign-off, so the human skills of critical thinking and decision-making become more important alongside the new tasks of AI prompting and oversight (for the moment, again!).
Deconstructing roles into tasks is helpful for snapshotting the human aspect of roles, and how those roles are framed as jobs, as the unbundling of tasks progresses. Aggregating that up, the number of roles - the bundles of human tasks and their associated skills - will stress headcount forecasting and workforce planning and budgeting processes. Which roles do we need, how many of them, and how stable are they will become even trickier questions.
Then, on top of that, the management aspect kicks in: what human skills do we need, and at what level of proficiency? Do we have those skills in house, and/or do we have to buy (hire) or build (develop) them? It will be a fast-moving picture for the skills world; skills latency will be a real issue - that is, which skills are enduring, stable needs, and which are more transient and ‘du jour’. The ‘meta-level’ human skills of growth mindset, learning agility, pattern recognition, empathy, collaboration and critical thinking will also become more and more important - but how do we ‘train’ for these?
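To make the ‘unbundling’ lens concrete, here is a minimal sketch (in Python, with entirely hypothetical role and task names) of how a role could be modelled as a bundle of tasks, each flagged as AI-substitutable or not, so that the remaining ‘human share’ and residual skills to buy or build fall out mechanically:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    skills: list[str]                # human skills the task currently relies on
    ai_substitutable: bool           # can a model or agent do this task today?
    needs_human_signoff: bool = True

@dataclass
class Role:
    title: str
    tasks: list[Task] = field(default_factory=list)

    def human_task_share(self) -> float:
        """Fraction of the role's tasks that still need a human to do the work itself."""
        if not self.tasks:
            return 1.0
        return sum(not t.ai_substitutable for t in self.tasks) / len(self.tasks)

    def residual_skills(self) -> set[str]:
        """Skills still in demand once AI-substitutable tasks are unbundled,
        plus the oversight skills the substituted tasks still require."""
        skills = {s for t in self.tasks if not t.ai_substitutable for s in t.skills}
        if any(t.ai_substitutable and t.needs_human_signoff for t in self.tasks):
            skills.update({"critical thinking", "AI prompting and oversight"})
        return skills

# A hypothetical content-marketing role, unbundled into three tasks
role = Role("Content Marketer", [
    Task("draft blog posts", ["writing"], ai_substitutable=True),
    Task("edit and fact-check", ["editing", "judgement"], ai_substitutable=False),
    Task("plan campaign strategy", ["strategy", "collaboration"], ai_substitutable=False),
])

print(role.human_task_share())   # ~0.67: a third of this role has already unbundled
print(role.residual_skills())    # the skills left to buy (hire) or build (develop)
```

The point is not the code but the shape of the data: once a role is a list of tasks, the human remainder and the residual skill set can be rolled up across roles to feed headcount forecasting and the buy/build conversation.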
In the medium term, the rise of AI agents will have a real impact. The recent announcements from OpenAI and Google - the releases of GPT-4o and Project Astra - really accelerate the possibilities here!
At first, goal-driven, conversational, multi-modal AI agents (i.e. with hearing and seeing as well as text capabilities) will start to replace roles. For now, the operating expenditure of implementing a fully fledged automated agent is still higher than the human equivalent, but the ROI will turn positive sooner rather than later. Aside from the financial considerations, some managers may also be attracted by the absence of the people-management hassles that come with humans!
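As a purely illustrative back-of-envelope - every figure below is an assumption, not a benchmark - the break-even works something like this:

```python
# A back-of-envelope comparison of an AI agent vs a human hire for one role.
# All figures are illustrative assumptions, not real cost data.

human_annual_cost = 60_000    # salary plus overheads for the role (assumed)
agent_setup_cost  = 40_000    # integration, evaluation, guardrails (assumed, one-off)
agent_annual_opex = 25_000    # inference, tooling, monitoring (assumed)
oversight_cost    = 10_000    # human review and sign-off of agent output per year (assumed)

def cumulative_cost(years: int, use_agent: bool) -> int:
    if use_agent:
        return agent_setup_cost + years * (agent_annual_opex + oversight_cost)
    return years * human_annual_cost

for year in range(1, 4):
    print(year, cumulative_cost(year, use_agent=True), cumulative_cost(year, use_agent=False))
# Year 1: 75,000 vs 60,000 - the agent is still the dearer option.
# Year 2: 110,000 vs 120,000 - on these assumptions the ROI has already turned positive.
```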
These AI agents, or goal-driven bots, will be powerful, but things will really get interesting once we give them personas. With fully developed personalities and traits, they will become entities on your team in their own right. This opens up the world of ‘digital employees’: imagine that instead of hiring a human, you create an onscreen avatar of your or your team’s choosing, complete with a functional career history and experiences, a personality, and an MBTI / strengths profile that complements the team’s. This would be, for all intents and purposes, another ‘remote’ team member. This is not a dream: it’s technologically possible today (check out and chat to Nova at SoulMachines for a glimpse).
This has all sorts of fascinating implications - one could have a fun day brainstorming on this (or we could just ask an AI!), but some that come to mind are:
From an org design and workforce planning point of view, predicting what human roles you will need in the mix in the future, or even six months from now, will be interesting - especially as the AIs’ capability and utility will change and grow very quickly. Budget negotiations will be about the human / AI mix and the spend across the two, not so much about headcount.
Having too few human roles in the mix, especially junior roles (which are the ones most likely to be replaced by AI agents), brings traps too. Without a sufficient pipeline of junior roles, opportunities to develop organisational insight and capacity in humans will become limited, so the internal pipeline of future leaders could become threadbare.
Imagine an AI agent that, in addition to the generic data set that GPT-4o or its equivalents have been trained on, has also been trained on all your own data: your emails, IMs, posts, documents, presentations, LinkedIn updates and so on, from your current role and prior ones, and perhaps also on your personal equivalent of those digital ‘breadcrumbs’. Additionally, you give it access to video clips of you presenting, or interacting in recorded meetings, so it can understand your professional style, voice tonality and body language. Upload photos and video clips of yourself to the avatar creation engine, and you have all you need to create a digital twin of yourself.
Sounds far-fetched? But it’s entirely possible, and such a digital twin might even be better than you in some respects: it would have near-perfect recall, be able to make connections and abstractions that might elude us, and it would never catch COVID, get stuck on public transport, or have bad days!
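For a feel of the mechanics, here is a deliberately toy sketch of one grounding step such a twin could use: retrieve the most relevant piece of your own past writing and fold it into the prompt. The corpus, the crude word-overlap scoring and call_model() are all stand-ins; a real twin would use embeddings, fine-tuning and much richer data (voice, video, style).

```python
from collections import Counter

# Stand-in for 'all your own data': a handful of made-up notes and emails.
personal_corpus = {
    "2023-q3-planning.md": "Our Q3 priority was shipping the onboarding revamp before the sales kick-off.",
    "email-to-alex.txt": "I'd rather we pilot with two design partners before scaling the rollout.",
    "allhands-notes.txt": "My view on hybrid work: default to async, and meet with intent.",
}

def relevance(query: str, doc: str) -> int:
    """Count overlapping lowercase words - a crude stand-in for embedding similarity."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def call_model(prompt: str) -> str:
    """Stub for whatever model or agent API the twin would actually run on."""
    return f"[model response grounded in: {prompt.splitlines()[1]}]"

def twin_answer(question: str) -> str:
    # Pick the piece of your own writing most relevant to the question...
    best = max(personal_corpus, key=lambda k: relevance(question, personal_corpus[k]))
    # ...and ask the model to answer in your voice, grounded in that context.
    prompt = (
        "You are a digital twin of the author. Answer in their voice.\n"
        f"Relevant past writing ({best}): {personal_corpus[best]}\n"
        f"Question: {question}"
    )
    return call_model(prompt)

print(twin_answer("Should we pilot the rollout with a few design partners first?"))
```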
Such a twin could be very useful… you could have it:
It would learn, through the interactions it has, and potentially get better and better, so that over time you might also have it carry out more complex tasks and even make decisions as your proxy.
This future is not just possible, it’s starting to arrive: Reid Hoffman, co-founder of LinkedIn, created an AI digital twin of himself and interviewed himself, part demo and part exploration of what’s to come. It’s equal parts fascinating and shocking! You’ll find it here
Going larger still: what if all the employees of a company had digital twins? They could work and interact with each other, be goal-driven and collaborative, and work autonomously, doubling the potential capacity of the organisation.
And why stop at one digital twin? Why not have 2, or 20, or 200? This is wild thinking, but the world of digital twins is within reach, and we need to start thinking about the implications, and our point of view on both the mechanics and the ethics of this for the future of work.
While Robert Gordon may paint a picture of stagnation, the potential for AI to augment human capabilities is vast and exciting. We stand on the cusp of a new era of productivity, driven not by replacing humans but by empowering them. Imagine a world where:
The future of work may be different from what we know, but it's a future brimming with potential. By embracing change, fostering a growth mindset, and collaborating with AI tools, we can unlock a new level of human achievement. The journey ahead is sure to have its challenges, but with a spirit of innovation and a focus on human potential, the possibilities are truly limitless.