Operations Optimisation
If downtime isn’t caused by people or equipment — who is responsible?
The usual suspects were found — and again, it wasn’t the people
A returning client raised concerns about a drilling rig start-up where what was initially accepted as “normal start-up pain” had slowly become the baseline. Excessive non-productive time — both visible and invisible — persisted, along with repeated rework and continuous last-minute firefighting.
The symptoms were familiar:
- Equipment not available when needed — or failing during execution
- Decisions made late and with limited context
- Plans constantly changing
- Crews waiting, compensating, and improvising
Because the project had been rushed out of the gate, it was assumed things would “sort themselves out” once people found their rhythm. That explanation was convenient — but wrong.
It was agreed that Drillconsult would step back and look at the operation as a system — not to assign blame, but to understand why execution never stabilised and what would actually fix it.
First: understand the system — not the symptoms
Before touching anything operational, an overview was built of:
- The execution model
- The organisational setup
- The planning and decision-making processes
- The supporting systems, reports, and interfaces
The intent was simple: understand how the system was designed to work before judging how it actually behaved.
Only then was a structured, hypothesis-driven approach finalised.
The alignment that usually comes too late
Before stepping onto the rig, a final alignment was carried out with both leadership and frontline stakeholders on:
- Scope and priorities
- Roles, decision authority, and interfaces
- Expectations, deliverables, and timing
- How the end-to-end execution was intended to flow
This alignment deliberately covered two horizons:
- Short-term actions to stabilise execution and rebuild confidence
- Longer-term improvements to strengthen robustness, capability, and ownership within the organisation
What is often missed — and was also missing here — is early alignment with the people who actually execute the work. Frontline personnel were brought in only after plans and key decisions had already been made, removing their ability to challenge assumptions, highlight practical constraints, and take ownership of the plan.
Skipping this step is a classic failure mode in rig start-ups. It almost always results in superficial fixes, recurring problems, and growing frustration — at a steadily increasing cost.
Boots on the ground
Once on site, the priority was to build trust and transparency.
The symptoms were immediately obvious. The root causes turned out to be far fewer.
Rather than jumping to conclusions, observations were combined with structured data collection:
- Direct observation of operations
- Conversations across functions and shifts
- Review of how work was actually planned and executed — not how procedures described it
Conclusions were drawn only once the data had sufficient quality and granularity.
Learn deliberately — not accidentally
Early in the process, it was agreed to conduct an After Action Review (AAR) at the end of each completed section.
These were not “lessons learned theatre”. They were used deliberately to:
- Reinforce that learning is part of execution — not something saved for the end
- Build competence while experience was still fresh
- Continuously validate and refine the overall analysis
- Demonstrate the value of structured reflection
Based on these learnings, short-term corrective actions were implemented:
- Coaching of crews and supervisors, supported by indirect competence assessments to identify gaps
- Clearer meeting structures and facilitation, supported by templates and guidelines
- Targeted training linked directly to execution gaps
- Clarified roles, responsibilities, and interfaces — documented in writing
- Active work on trust, psychological safety, and ownership
Execution was continuously monitored to verify that improvements worked in practice, not just on paper.
What the real causes turned out to be
The real causes were not unique. They are the same systemic failures repeatedly seen in underperforming rig start-ups and drilling campaigns.
These are not people problems.
They are system failures — shaped, funded, and tolerated by leadership decisions.
1. Weak execution readiness and poor front-end alignment — especially with the frontline
Execution-readiness processes (DWOP or equivalent) were either missing or treated as a formality. Critical risks, interfaces, competence gaps, and logistical constraints were identified during execution rather than mitigated upfront.
Frontline personnel were involved too late to meaningfully influence the plan.
2. Unclear roles, responsibilities, decision authority, and communication channels
Work was assigned based on job titles rather than demonstrated capability. Decision rights were unclear, slowing execution and increasing rework. Little effort was made to verify that people in critical roles were equipped to make the decisions expected of them.
3. Poor on-site planning, look-ahead, visibility, and communication
Look-ahead planning was high-level and largely confined to leadership. Crews lacked forward visibility and context. Those executing the work did not clearly understand how their output affected downstream activities or other teams.
4. Weak meeting discipline and ineffective communication
Communication channels were unclear or inconsistently used. Meetings lacked clear objectives and rarely concluded with aligned decisions or actions. Information was shared late, incompletely, or not at all.
5. Knowledge hoarding and reluctance to give away responsibility
There were no robust structures to share knowledge. In some cases, knowledge was treated as power rather than as a shared operational asset. Responsibility was retained rather than distributed.
6. Low trust and limited psychological safety
An “open door policy” was referenced but not practised. Personnel did not feel safe escalating concerns early. By the time issues surfaced, options were limited and costly.
The uncomfortable truth
The equipment didn’t fail first.
The system failed first.
When a rig continues to struggle long after start-up, it is rarely a “rig problem”. It is almost always a planning, alignment, communication, and ownership problem.
People were doing their jobs — but they did not understand that their output was someone else’s critical input. When quality degrades at one interface, the cost is paid downstream — in time, safety margin, and trust.
Systems do not fix themselves during execution.
The longer-term corrective actions will be covered in a follow-up article.