Update Jul 2009

The marine industry needs to do more than manage documents and statistics. The statistics need to make sense too.

For years, document management and quality assurance have gone hand in hand, inside the shipping industry as well as outside it. Statistics derived from fine-grained reporting have been popular for QA, and documents have been the data-collection medium of choice.

In shipping, as in most industries, QA is still primarily associated with compliance and certification.

However TMSA, and especially chapter 12, prompts us to rethink whether documents and statistics are good business indicators. Chapter 12 is the TMSA chapter that deals with comparing internal audits with external audits. This comparison no doubt helps oil companies keep track of whether a company monitors its own operation more closely than external inspectors and auditors do.

But what is the point of this?
The point, no doubt, is to compare internal assessments with external ones and determine the quality of common-sense management practices: whether internal quality controls exceed external controls, in which management areas undesired events occur, where to focus next in the continuous improvement process, and so on.

But there is a problem: internal and external audits may cover different areas of focus, and external inspections certainly all differ from one another.
Is this normal? Of course it is. Each is a different point of view on the same subject, just as different customers like different things about the same product. So it is not surprising, and in fact a normal development, to have a CDI report format with different items, in a different order, than that of SIRE reports, which in turn differ from Port State inspection formats, which are all different from internal inspection criteria, which again differ from ship to ship.

But why does this pose a problem for QA software, DM software, or enterprise software in general?
QA and document management software should not be about documents or statistics alone. It is primarily about grouping things that are similar but do not initially seem so, in order to draw the right conclusions.

CDI, Port State, SIRE and every other inspection format is designed with many goals in mind and with a view to being convenient to the inspector: reminding him where to look, allowing consistency, and so on. On top of this, the inspection criteria need to cover the perceived quality-assurance needs of the issuing authority. This is a major undertaking that combines multiple goals and opinions within the issuing authority. Finally, for the ship operator and customer, the results of the inspections need to provide meaning, so as to assess and improve the management quality and hardware of the vessel being inspected. So we have very different inspection goals combining to create each external authority’s inspection criteria. Then we have several authorities, plus the ship operator’s own inspection criteria.

But when all these are combined, can we eventually determine whether the ships inspected are managed well and their hardware is in the right condition? Of course any qualified shipping professional can do this by reading the reports and making a qualitative assessment. But a quantitative assessment with statistical analysis can be highly misleading.

In order for it not to be so, not only do the data categories need to be matched, but they also have to group nicely into more general categories that are meaningful in the context of assessing management quality. So while the categories have to fit common fine-grained criteria, they also have to group well with respect to problem categories, processes, or even corporate goals.
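As a rough illustration of what such matching might look like in data terms, findings from different formats can be translated into one shared set of management-quality categories. The item codes and category names below are invented for the example, not actual CDI or SIRE items:

```python
# A minimal sketch, using invented item codes, of mapping fine-grained
# findings from different inspection formats onto one shared set of
# management-quality categories.

SHARED_CATEGORY = {
    ("SIRE", "5.12"):       "Change Management",  # hypothetical item numbers
    ("CDI", "7.3.1"):       "Change Management",
    ("PortState", "PS-04"): "Maintenance",
    ("Internal", "ENG-17"): "Maintenance",
    ("Internal", "DEK-02"): "Crew Training",
}

def shared_category(source, item):
    """Translate a format-specific finding into the common category."""
    return SHARED_CATEGORY.get((source, item), "Uncategorized")

# Only once findings from every format land in the same categories can a
# statistical count across formats mean anything.
print(shared_category("SIRE", "5.12"))  # -> Change Management
print(shared_category("CDI", "7.3.1"))  # -> Change Management
```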

Here are some examples:

  1. A non-conformance about failure to follow the right process for a replacement oxygen and gas meter seems different from a non-conformance about failure to perform a risk assessment before shutting down a sea water pump and blanking it off. One is about a gas meter, the other about a pump. But they may be quite similar with respect to perceptions of the importance of safety. Seen from the viewpoint of a mariner who has not been informed of some very relevant details about what was done in his absence, they are even more similar: a gas meter that behaves differently than expected, and a pump that has been blanked off, very differently from what he would expect.
  2. A crack in the flange of a fuel pipe in the purifier room may seem both quite different from and quite similar to a crack in the flange of a pipe feeding the main engine injection pumps. However, if the main engine pipe is part of the fuel pump structure, the cause of the problem can prove quite different from the damage to a regular pipe fitted by the shipyard.
  3. The injury of a crew member due to a fall on a slippery part of the deck may seem both quite similar and quite different to the injury of another crew member who recently signed on and was not aware of how to use the lathe in the engine room. The first injury may be related to a number of safety precautions that need to be taken while crew is working on deck, such as cleaning the deck to remove oily residues, wearing safety shoes, and painting the deck surface with a special non-slip coating, while the second incident might be related to training issues, lack of experience, and the absence of written instructions on how to use the lathe.
  4. A malfunction of the hydraulic system of the lifeboat lowering gear that has caused damage to the wires that hold the boat is a different problem to resolve than damage to the wire of the provision crane caused by lifting over-heavy machinery. In both cases there are damaged wires, but the root cause is different.

Until the root cause has been ascertained and the problem resolved, there can be many categories to which the problem can belong, and they can change as the case moves towards resolution.
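A small sketch of this idea, with class and category names invented for illustration: a finding carries provisional categories that are replaced once root cause analysis settles the matter.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Finding:
    """An observation whose categories stay provisional until the root
    cause is known (all names here are illustrative)."""
    description: str
    categories: set = field(default_factory=set)
    root_cause: str | None = None

    def resolve(self, root_cause: str, categories: set) -> None:
        # Once the root cause is established, the provisional categories
        # give way to the ones that actually explain the event.
        self.root_cause = root_cause
        self.categories = categories

f = Finding("Damaged wires on the lifeboat lowering gear",
            categories={"Wires", "Lifesaving Equipment"})
f.resolve("Hydraulic system malfunction",
          {"Hydraulic Systems", "Lifesaving Equipment"})
```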

So not only do we have a problem designing similar inspection formats; we have a massive challenge assigning observations to the right categories, in order to manage risk and assess management quality.

So imagine how happy your financial officers will be when they come to realize you could assess your management better before you started using software and statistical reports than you can now.

Fortunately, not all is lost. In the information age, access to information and data collection is important. If you are collecting data, that is good. If you are categorizing it properly as it goes into your system, you have solved 90% of the problem of using software to improve the quality of your management. To do this, however, the system has to track the work of your staff as they deal with, say, a defect or non-conformance and resolve it. If the system does not collect the salient points in the processing of daily problems, no statistics can ever say anything useful about management and improvement.
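One way to picture this, with event names invented for the example: the system logs each salient point as the defect moves towards resolution, and only those logged points can ever feed a statistic.

```python
from datetime import datetime

defect_log = []

def record(event, detail):
    """Log one salient point in the life of a defect."""
    defect_log.append((datetime.now(), event, detail))

record("reported",      "Crack in the flange of a fuel pipe, purifier room")
record("risk_assessed", "High-pressure line; main engine supply affected")
record("root_cause",    "External stress on the pipe, not corrosion")
record("closed",        "Pipe renewed; supports modified")

# Statistics can only ever summarize what was captured here.
for timestamp, event, detail in defect_log:
    print(timestamp, event, detail)
```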

However, since there could be many salient points that make an expectation failure, such as a crack in a fuel pipe, important, the system has to present the right ones to the user at the right time; otherwise it is quicker for the user to write down comments in ordinary language that the computer cannot recognize and thus cannot present later in statistical form. For example: is it a crack or a welding pore? Is the pipe under external stress? Is it a high-pressure pipe with a wall-thickness limitation? Is there a maintenance problem common to these pipes? Even more urgently, is there a process that this breakdown affects, and does it call for risk assessments at a variety of levels? But how would the system know it is a fuel pipe, so as to consider corrosion unlikely to be relevant? How would the system know about the pipe configuration and design limitations, so as to ask relevant questions about cause and effect?
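A toy sketch of such context-driven prompting, with questions and attributes invented for the example: the system keeps a predicate with each question and shows only those that match what it knows about the item.

```python
# A minimal sketch of presenting only the follow-up questions
# that fit the context of the report.

FOLLOW_UP_QUESTIONS = [
    # (question, predicate over the item's known attributes)
    ("Is it a crack or a welding pore?",      lambda a: "pipe" in a),
    ("Is the pipe under external stress?",    lambda a: "pipe" in a),
    ("Is there a wall-thickness limitation?", lambda a: "high_pressure" in a),
    ("Could corrosion be a factor?",          lambda a: "water_service" in a),
]

def relevant_questions(attributes):
    """Return only the questions whose predicate matches the item."""
    return [q for q, applies in FOLLOW_UP_QUESTIONS if applies(attributes)]

# For a high-pressure fuel pipe the corrosion question is filtered out,
# precisely because the system knows it is a fuel pipe.
for q in relevant_questions({"pipe", "fuel", "high_pressure"}):
    print(q)
```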

Herein lie the reasons why computers do not help with management decisions. To do so, the system has to understand (that is, have a model within its data structure of) everything important about the enterprise and everything about what the user is doing (in this case, reporting a defect).
In the gas meter example, how would the system know that gas meters can cause death if incorrectly operated? Why, then, would it consider asking the user whether the replacement gas meters procured operate in the way the new user would expect? This is not a question normally asked when buying new binoculars or chipping hammers. And when the ballast pump isolation process is being reported, how would the system know that the blanking-off process requires some consideration of coinciding factors, so that someone does not inadvertently flood the engine room?
The system could of course ask you all the questions it knows regardless of context, but would this help the management process or delay it? Would senior officers such as masters and engineers, who are responsible for resolving problems, tolerate answering irrelevant questions?

You may ask what this has to do with maritime audits like CDI and SIRE. Well, a failure to apply the correct change management process to a gas meter could be categorized under Change Management, which would make sense, or under Gas Meters, or under Tank Entry, or under Safety Equipment, and so on. But which one would best indicate management quality? And how can the inspector who records the observation be relied upon to assign the defect to the right category out of hundreds that exist, and even some that don’t! If the root cause is assigned incorrectly, what is the point of taking a statistical count from the categorization? Even if the finding is correct and the non-conformance is assigned under failure to manage change, would it carry the same weight, and should it be considered on a par with buying binoculars and chipping hammers without going through a change management procedure?
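As one possible way to represent this, with weights and category names invented for the example: a finding can sit in several categories at once, each with a weight reflecting its safety-criticality, so that a naive count no longer puts gas meters and binoculars on a par.

```python
# A minimal sketch, with invented weights, of multi-category findings
# weighted by the risk they actually carry.

gas_meter = {
    "description": "Replacement gas meter procured without change management",
    "categories": {"Change Management": 1.0,   # safety-critical change
                   "Safety Equipment":  1.0,
                   "Tank Entry":        0.6},
}

binoculars = {
    "description": "Binoculars purchased without change management",
    "categories": {"Change Management": 0.1},  # same category, far less weight
}

# A naive count scores both as one "change management" finding each;
# a weighted sum does not.
for finding in (gas_meter, binoculars):
    print(finding["description"], "->", sum(finding["categories"].values()))
```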

In the information age the information collection problem has been solved; what we now need to solve is the information categorization problem. Never was this more relevant than today, when the obstacle of access to information is behind us.

At Ulysses we have built a model within our software that makes it easy to apply the right categories to data, without burdening the user with huge lists of potentially relevant criteria that take more time to record than they are worth.

So CDI, SIRE, Port State and internal inspection findings can be categorized by staff on board and ashore as a by-product of reporting, allowing meaningful management conclusions to be drawn and thereby satisfying the goal set by TMSA chapter 12.