
Why Convictional implemented four day work weeks

Date: July 1, 2025
Read: 7 min
Author: Roger Kirkness

Since ChatGPT came out, every six months or so a new round of model releases requires rewiring my brain around the possibilities and drawbacks. Each time I feel like I've figured out what the impact of AI will be on our business, it surprises me with newly useful behavior.

While I've been using generative AI at work since ChatGPT was released, the new generation of frontier models (e.g. OpenAI o3, Claude 4 and Gemini 2.5) felt about as profound an improvement to me as when OpenAI released GPT-4. It felt like another meaningful step forward in terms of reliable agentic behavior, where the AI can take a relatively complex multi-step project and complete it within a few prompts.

The notion that the new models go (very crudely) from 20% to 50% reliable on writing or coding tasks on their own may seem like yet another illegible benchmark, but when you're using them to their limit every day the difference is shocking. All of a sudden you go from having a single genius-but-impractical intern to having the fractured attention of a team of 4-5 perfectionistic intermediate-level software engineers. That is not a small change; it is one of the most profound changes in the history of work.

Our research team recently did some work that modeled the constraints within organizations on agentic AI adoption, and it comes down to trust. Trust in the sense of how reliable the models are, how much people trust them, and how much people trust each other. Our research found that without trust, AI adoption happens far slower, which puts organizations into a vicious cycle of overwork and depleting trust.

I decided to zoom out and think about what was going to continue to be true about work even as agents became reliable across the spectrum of work in our organization. The reality is that coding agents are about a year ahead of other functional agents in terms of overall reliability. So while software engineering is experiencing massive disruption, other functions are only starting to feel the effects of these changes day to day. But what's happening in software engineering will soon come to every role, level and function. It will come to regulated professions, and to all forms of knowledge work.

As much as agentic AI can be emotionally uncomfortable to think about, it is necessary to lead human organizations through it as a technological change like any other. Someone in each customer domain is going to adopt it and become better at serving their customers. This will force adoption across the board; whether you go kicking and screaming or with curiosity and patience is ultimately up to the leaders of organizations to navigate.

My reflection was that what matters now is creativity, human judgment, emotional intelligence, prompting skills, and deep understanding of customer problems. These skills can't be measured in hours at a desk. In fact, trying to maximize hours often works against developing these abilities. Just like athletes need recovery time to perform at their peak, knowledge workers need time to recharge their creative and intellectual energy.

It was at this point, well into experimenting with agents and spiraling on the consequences they would have, that I spun up a Convictional decision process in order to come to a resolute decision. It's a workspace where you can centralize context for major decisions, including the goal, the options, the criteria for evaluating those options, and links to the inputs that inform the decision. If you want to try one out, you can log in and try it here.

My goal, verbatim from the start of the decision process, was as follows: "My goal is to maximize the productivity of the team, including attaining product/market fit and building a large and successful company. Within reason, whatever leads to the outcome is the aim I have considering different hours." While the benefits to the team of a shorter work week were clear to me, the benefits to the organization were the lens through which I wanted to evaluate it.

I formed my hypothesis around this desire to actually accelerate the overall work of our organization, which I recognized was constrained not by work hours or effort but by the speed of our AI adoption, along with creativity, judgment and a deep understanding of customers. Fusing those things is what will determine which organizations win in the future, so I looked for things to try.

I happened to read an article about Juliet Schor's upcoming book about the four day work week. Her research goes back multiple decades, and more recently spans every major economic power in the world as well as large and small organizations. I consumed what I could find online and eventually tracked down and read her new book cover to cover. By the time I was about halfway through the book, I was confident that this would be a more effective way to create the trust in the organization needed to adopt AI, and to get more things done.

I finalized the decision in our app about a week after starting the formal process, and about a month after first starting to experiment with the newest generation of models. I knew that this decision would be scrutinized by every stakeholder, including the current team, future team members, our investors and customers. It had to address each party's interests; while these groups are not a zero-sum set to consider, it was still important to be rigorous about them. The research is strong, and the implementation is equally important to success.

I decided to make the shift to a four day work week permanent, with Friday as a customer on-call day for sales and support, but no internal meetings, operational work or internal communication. We can't avoid working entirely if customers are working, which most of them do on Fridays and will for the foreseeable future. I wanted to signal to the team that the benefits of increasing automation were something we plan to continue to distribute fairly. This was the first time it seemed like such a large amount of what we previously considered work would be automatable, so it felt like the right moment.

I also decided not to propose or implement changes to the team's compensation. Overall my belief is that our quality of work will improve and the volume of work output will increase. While this perspective is informed by the realities of artificial intelligence, it is also a currently contrarian point of view. Most people's vision for what AI will bring centers on zero-sum efficiency gains, and rarely considers the second-order effect on an organization's ability to improve how it serves customers. My belief is that our ability to serve customers is improved so much by AI (implemented correctly) that any reduction in work hours is offset by the gains in creativity and decision quality over time.

While AI is going to devalue many things we used to consider valuable work, it will also increase the value of the things we do that are uniquely human. Collaborating as a team of people, and reaching unique conclusions about how to serve customers, will improve every service we consume. Things that people want but that previously went underserved become higher quality, and things that were previously impossible become possible. Whether the future is good in an AI world comes down to policies like this one. Sharing the benefits of automation tangibly with the people inside an organization through four day work weeks improves trust and ensures people can sustain their contributions.