AI Process Safety Tool Engagement Assistance

Over the past couple of years, Process Safety Matters has been exploring the liminal space between AI-infused Process Safety Tools and the humans who are increasingly being mandated to use them. The future of AI in Process Safety is already here; it's just unevenly distributed:

  • HAZOP team members are increasingly using ChatGPT and other LLM tools to informally complement the review process (a minimal sketch of this informal use follows this list)
  • Reviews are taking place which include formal contributions from AI HAZOP tools, such as Kairostech HAZOP Assistant
  • High Hazard Stakeholders are developing AI Process Safety tools which aim to drastically reduce, or perhaps eliminate, human intervention in PHA processes
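As an illustration of that first, informal mode of use, the sketch below shows a team member asking a general-purpose LLM for candidate deviations on a single node. It is a minimal sketch only: it assumes the OpenAI Python client is available, and the node description, guidewords, prompt wording and model name are all illustrative rather than taken from any real review.

```python
# Minimal sketch only: informal use of a general-purpose LLM to complement a
# HAZOP review. Assumes the OpenAI Python client is installed and an
# OPENAI_API_KEY environment variable is set; the node description, guidewords,
# prompt wording and model name are illustrative, not from any real review.
from openai import OpenAI

client = OpenAI()

NODE_DESCRIPTION = (
    "Node 3: gas export compressor suction scrubber, "
    "normal operating pressure 45 barg, level control to closed drain."
)
GUIDEWORDS = ["No flow", "More flow", "More pressure", "More level", "Other than"]

prompt = (
    "You are assisting a HAZOP team. For the node below, suggest credible "
    "deviations, causes and consequences for each guideword, and flag any "
    "suggestion you are unsure about so the team can challenge it.\n\n"
    f"Node: {NODE_DESCRIPTION}\n"
    f"Guidewords: {', '.join(GUIDEWORDS)}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

# The output is raw material for the review team to challenge,
# not an entry in the HAZOP record.
print(response.choices[0].message.content)
```

The point of keeping such use informal is visible in the final line: the output is raw material for the team to challenge, never a worksheet entry in its own right.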

We have experience of engaging with this future and would be happy to share what we have learned:

Event: Hazards 34 Presentation, 'Accelerating Adoption of AI-infused Digital HAZOP Assistant Tools'

Learning: Adoption can be accelerated if Tool suppliers focus on certain aspects:

  • How they align with the Client as it currently operates
  • How they can benefit a Client that is already working well
  • How they can persuade a Client employee to advocate for these benefits


The Client can also help themselves by:

  • Being wary of giving a Tool supplier access to senior leaders too quickly
  • Anticipating a Supplier approach by deciding in advance what attributes a new Digital tool should have

Event: Coordination Calls

Learning: Feedback from a HAZOP Assistant client: 'When my employees start using HAZOP Assistant, I am concerned that they start to trust it too much.'

Event: Perenco Gas Plant HAZOP

Learning: Response of Review Team members to AI HAZOP Assistant contributions

Event: Dubai Petroleum Sunrise HAZOP

Learning: Response of Review Team members to ChatGPT contributions

Event: Prosaic 2025 Conference Presentation, 'How Humanlike should AI HAZOP Tools be?'

Learning: Application of Mollick's four tenets of AI as a co-intelligence:

  1. Always Invite AI to the Table. Engage actively with the many current means of leveraging AI PHA tools (HAZOP Assistant, ChatGPT, others) to help decide how to launch your journey

  2. Be the Human in the Loop. Ensure that you plan for human gatekeepers who iteratively interrogate the evolving AI to assure sufficient understanding and pushback (a minimal gatekeeping sketch follows this list)

  3. Treat AI Like a Person (but remember it’s not). Insist that your formal tools communicate in a style that doesn’t lull human team members into the ‘but AI is our friend’ trap. Clever but annoying might work best!

  4. Assume This Is the Worst AI You Will Ever Use. Be prepared to keep evolving your approach until AI-supported high-hazard facilities are as safe as driverless trains
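To make tenet 2 concrete, here is a minimal sketch of the gatekeeping idea: AI-proposed entries sit in a pending queue and only reach the worksheet once a named reviewer has interrogated them and recorded a decision and their pushback. The record structure, field names and gate() function are illustrative assumptions, not taken from HAZOP Assistant or any other tool.

```python
# Minimal sketch of tenet 2, 'Be the Human in the Loop': AI-proposed HAZOP
# entries wait in a pending queue and only reach the worksheet once a named
# reviewer has interrogated them and recorded a decision. Field names, the
# gate() function and the example data are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class AISuggestion:
    guideword: str
    deviation: str
    cause: str
    consequence: str
    reviewer: Optional[str] = None   # who challenged the suggestion
    decision: Optional[str] = None   # "accept", "amend" or "reject"
    comment: str = ""                # the pushback, in the reviewer's own words


def gate(suggestion: AISuggestion, reviewer: str,
         decision: str, comment: str) -> AISuggestion:
    """Record a human decision; nothing reaches the worksheet without one."""
    if decision not in {"accept", "amend", "reject"}:
        raise ValueError("decision must be 'accept', 'amend' or 'reject'")
    suggestion.reviewer = reviewer
    suggestion.decision = decision
    suggestion.comment = comment
    return suggestion


# AI-proposed entries are held here until a human has reviewed them.
pending: List[AISuggestion] = [
    AISuggestion("More pressure", "Blocked outlet on scrubber",
                 "Inadvertent closure of outlet valve",
                 "Possible overpressure of vessel"),
]

worksheet: List[AISuggestion] = []
for suggestion in pending:
    reviewed = gate(suggestion, reviewer="Lead facilitator", decision="amend",
                    comment="Consequence understated; confirm the relief path.")
    if reviewed.decision != "reject":
        worksheet.append(reviewed)

print(f"{len(worksheet)} entries accepted into the worksheet after human review")
```

Whatever the actual tooling, the design point is that the human decision and the reasoning behind it are captured alongside the AI suggestion, so later reviewers can see where pushback occurred.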

Interested in learning more about what we offer?