29 Jan 2026
by Dr Sam Andrews

37% of marine professionals ‘witnessed AI failures’

There appears to be cautious enthusiasm for AI in the maritime sector, although more than a third of respondents to a recent survey have directly experienced issues.

The public release of ChatGPT in November 2022 marked a turning point for artificial intelligence (AI), thrusting it firmly into our working lives. The maritime sector has been no exception but, as a recent report from Thetius and Marcura highlights, growing enthusiasm for the technology is still tempered by caution.

Drawing on a survey of 130 maritime professionals and in-depth interviews, the report ‘Beyond the Hype’ found that while around 80% of respondents believe AI can improve efficiency, and a similar number are running pilot and small-scale projects, just 11% have formal AI policies and 37% have personally witnessed AI failures.

What we trust AI to do (and where we hesitate)

The report showed that maritime professionals are already comfortable using AI in a range of support functions. Respondents highlighted applications such as reducing manual workflows by organising inboxes and flagging key information, scanning and analysing legal documents, identifying risky voyage decisions, and monitoring regulatory compliance.

Indeed, Bob Doncom, FIMarEST, chair of the IMarEST Artificial Intelligence Special Interest Group (SIG), says that using AI to help design policies around operational procedures could prove extremely beneficial because it can give us an idea of what is likely to happen in different situations.

“People can cope with one, two, three variables, whereas AI can go seven, eight, or more… we can throw in all sorts of variables and run scenarios to see the likely outcomes,” he says.
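
To make that concrete, here is a minimal, hypothetical sketch of the kind of multi-variable scenario sweep Doncom describes. Every variable name, range and weight below is invented purely for illustration; a real system would use validated models and operational data.

```python
import random

def run_scenario() -> bool:
    """One hypothetical voyage scenario combining seven variables at once."""
    # All names, ranges and weights here are invented for illustration.
    wind_speed_kt = random.uniform(0, 45)      # weather
    current_kt = random.uniform(0, 4)          # tide and current
    traffic_density = random.randint(0, 20)    # nearby vessels
    visibility_nm = random.uniform(0.1, 10)    # fog, rain, darkness
    crew_fatigue = random.uniform(0, 1)        # 0 = rested, 1 = exhausted
    own_speed_kt = random.uniform(5, 20)       # own vessel speed
    engine_margin = random.uniform(0.5, 1.0)   # available power headroom

    # Toy risk score mixing all seven variables -- the kind of interaction
    # people struggle to juggle in their heads but a model can sweep.
    risk = (0.02 * wind_speed_kt + 0.1 * current_kt
            + 0.03 * traffic_density + 0.5 / visibility_nm
            + 0.8 * crew_fatigue + 0.02 * own_speed_kt
            - 0.5 * engine_margin)
    return risk > 1.5  # arbitrary threshold for a "risky" outcome

# Run many scenarios and report how often the threshold is breached.
trials = 100_000
risky = sum(run_scenario() for _ in range(trials))
print(f"Risky outcome in {risky / trials:.1%} of {trials:,} scenarios")
```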

What unites these examples is AI supporting human decision-making: people remain responsible for interpreting the outputs and then acting on them.

“Human in the loop is absolutely brilliant – unless you can’t afford the time,” stresses Doncom. In rapidly evolving situations, such as collision avoidance, the volume of information and speed of events can overwhelm human operators.

“There may be many different things happening at the same time… seafarers – like any human being – can only take so much input at once,” Doncom explains. “As the situation gets more complex and the threat gets closer, the humans may be just too slow.”

However, just because AI may be able to arrive at an answer faster than a human, that answer is not necessarily the correct one, according to Doncom. “The machine might be wrong, but if you don’t do anything [in a rapidly evolving collision situation] you’re wrong as well.

“We’ve got to find out where the benefit of relying on autonomy or AI overcomes the risk, when not relying on it is more of a hazard than relying on it.”
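
One way to read that trade-off is as a simple expected-loss comparison. The numbers below are entirely invented; the point is the shape of the calculation, not any real probabilities.

```python
# Toy expected-loss comparison with invented numbers: in a fast-moving
# collision situation, when does acting on a possibly wrong AI
# recommendation beat waiting for a slower human decision?
p_ai_wrong = 0.05         # chance the AI's manoeuvre is the wrong call
p_human_too_late = 0.30   # chance the human decides after it matters
loss_collision = 1.0      # normalised cost of a collision
loss_bad_manoeuvre = 0.2  # cost of an unnecessary or wrong manoeuvre

expected_loss_act = p_ai_wrong * loss_bad_manoeuvre     # rely on the AI
expected_loss_wait = p_human_too_late * loss_collision  # rely on the human

print(f"Act on AI now:  expected loss {expected_loss_act:.2f}")
print(f"Wait for human: expected loss {expected_loss_wait:.2f}")
```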

A question of control and trust

Trust sits at the heart of the report’s findings. While concerns about data privacy and cybersecurity play a role, survey respondents rarely attributed the AI failures they personally witnessed to the technology itself, but to unrealistic expectations, insufficient training, and a lack of human oversight.

These findings challenge the idea that mistrust stems from fear of AI. Instead, they suggest that confidence is partly undermined when systems are poorly explained or oversold.

“Particularly in the early days, AI was almost the silver bullet. Everyone was keen to sell it, but they didn’t have that big a grasp on the technology,” says Doncom, adding that, “the cynicism of the general public and the media has helped rein that in.”

Professor Tor Arne Johansen, who has worked with AI for more than three decades, says there is a sense that the technology is moving faster than society’s ability to govern it.

“People are fascinated and overwhelmed by it… Things are happening very, very quickly, and it can look like it’s a bit out of control,” he says, noting how easy it is now to create fake images and more with generative AI tools. “People are asking how we can manage it, control it, use it in the right way.”

Part of using AI in the right way lies in training: crafting quality prompts to get the best outputs, interpreting those outputs correctly, and building trust in AI as an enabler rather than a replacement for the workforce.
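
As a hedged illustration of what that prompt training might cover, compare a throwaway prompt with a structured one. The task and wording here are invented, not drawn from the report.

```python
# Invented example contrasting a vague prompt with a structured one.
vague_prompt = "Summarise this charter party."

structured_prompt = (
    "You are assisting a chartering team. Summarise the attached charter "
    "party in five bullet points covering: laytime terms, demurrage rate, "
    "cargo description, load and discharge ports, and any unusual clauses. "
    "Quote the relevant clause number for each point and flag anything "
    "ambiguous rather than guessing."
)

print(structured_prompt)
```

The structured version constrains scope, format and failure behaviour, which is the kind of habit such training aims to instil.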

[Inline image: smartphone displaying the ChatGPT app page]

Specialist systems

Arguably, most people’s experience of AI comes from generic generative tools such as ChatGPT. Both Doncom and Johansen – and indeed the report – note that generic tools are not well suited to the maritime sector.

Rather, maritime AI tools need to be both domain-specific and use-specific, designed around clearly defined operational problems, workflows and risks. Developing such tools is the focus of the recently opened Norwegian Maritime AI Centre.

“Our objective is to operationalise AI in the maritime sector, and we will do that through three pillars,” says Johansen, who is the Centre Director at the Norwegian University of Science and Technology, which runs the centre.

“One is to build a technology foundation, where we exploit a lot of the drive that is in AI today, but we combine that with the domain knowledge. Second, we’re going to work with our partners on selected use cases.

“The third pillar is we’re going to work more broadly on sharing knowledge, education, training and working with stakeholders.”

The centre’s remit reflects a broader recognition that maritime AI cannot be developed in isolation. “We have a very broad focus involving anything from shipbuilders and shipyards to shipowners, equipment manufacturers, authorities, and classification societies,” Johansen says. “It’s not only about the technology. It’s about the people, organisations and businesses.”

One of the most persistent challenges facing specialist maritime AI is domain-specific data. Generic models have been successful in large part because they are trained on vast volumes of text and images from the internet.

By contrast, Johansen notes, maritime data is often proprietary, fragmented and uneven in quality, limiting both what AI systems can learn and how reliably they perform.

Data creates value when it is used, yet sharing it raises commercial and competitive questions. “The data quality and the amount of data are, I think, one of the key factors why AI projects sometimes fail,” concludes Johansen.
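
As a rough sketch of the kind of defects Johansen is pointing to, the snippet below profiles a handful of hypothetical vessel-track records for missing identifiers, out-of-range coordinates, implausible speeds and duplicates. All field names, plausibility bounds and sample values are invented for illustration.

```python
# Minimal data-quality profiling of hypothetical vessel-track records.
# Field names, bounds and sample values are all invented for illustration.
records = [
    {"mmsi": "366999712", "lat": 59.91, "lon": 10.75, "sog_kt": 12.4},
    {"mmsi": "366999712", "lat": 59.91, "lon": 10.75, "sog_kt": 12.4},  # duplicate
    {"mmsi": None,        "lat": 61.20, "lon": 5.32,  "sog_kt": 9.8},   # missing ID
    {"mmsi": "258123456", "lat": 95.00, "lon": 4.90,  "sog_kt": 8.1},   # bad latitude
    {"mmsi": "258123456", "lat": 60.39, "lon": 5.32,  "sog_kt": 140.0}, # implausible speed
]

seen, issues = set(), []
for i, r in enumerate(records):
    if not r["mmsi"]:
        issues.append((i, "missing vessel identifier"))
    if not (-90 <= r["lat"] <= 90 and -180 <= r["lon"] <= 180):
        issues.append((i, "coordinates out of range"))
    if r["sog_kt"] > 50:  # toy plausibility bound on speed over ground
        issues.append((i, "implausible speed"))
    key = tuple(r.values())
    if key in seen:
        issues.append((i, "duplicate record"))
    seen.add(key)

print(f"{len(issues)} issues found in {len(records)} records:")
for i, problem in issues:
    print(f"  record {i}: {problem}")
```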

 

Tell us what you think about this article by joining the discussion on IMarEST Connect.

Image: concept of AI cargo container ship. Credit: Shutterstock.

Inline image: smartphone displaying the ChatGPT app page. Credit: Shutterstock.