31 Oct 2023
by Bob Doncom

Intelligent Assistants

Bob Doncom, chair of the IMarEST Artificial Intelligence SIG, demystifies artificial intelligence, machine learning, and autonomy and says there are no killer robots.  

I always joke with people about killer robots, but they do not exist (well, not on ships yet). There are now autonomous defence mechanisms, so we do have that high-end (machine-learning-taught but not closed-loop-driven) autonomy. But there's a whole other layer of uses for machine learning, in particular artificial intelligence with no direct control, where you can use it as just another technology.

Massive analytical power 

For those who want to understand what’s going on inside machine learning, I’d suggest thinking about all those things you learned in your STEM studies. Take a look at all the books you used to help you with your statistics, then imagine having a machine that can do 10,000 of those calculations a minute to provide a sea of data that you can then decide what to do with. That's really where the machine learning side comes into its own.

Machine learning allows us to process enormous amounts of data in ways that humans can't always comprehend. We talk about one-dimensional, two-dimensional and three-dimensional data, and some of us can make the jump to four-dimensional data (usually time series), but machine learning can handle many, many dimensions. Once it delivers its results, we can project them onto a plane where a human can understand them, because its output can be two- or three-dimensional even though its input might be four- or many-dimensional.
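To make that projection idea concrete, here is a minimal sketch (my own illustration, not tied to any particular marine system) using principal component analysis, a standard way of collapsing many-dimensional data onto the two-dimensional plane a human can view:

```python
import numpy as np

def project_to_plane(data, n_components=2):
    """Project high-dimensional rows of `data` onto their top
    principal components so a human can view them on a plane."""
    centred = data - data.mean(axis=0)            # remove the mean
    # Singular value decomposition finds the directions of
    # greatest variance in the point cloud.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T          # coordinates on the plane

# 200 points in 10 dimensions, reduced to 2 for plotting.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 10))
plane = project_to_plane(cloud)
print(plane.shape)   # (200, 2)
```

The ten-dimensional cloud keeps its overall shape as well as any flat picture can show it, which is exactly the "project it onto a plane" step described above.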

How did this all start?  

A long time ago, in the 1990s, IBM designed “Deep Blue” to play chess, and it beat the world champion chess player Garry Kasparov. It was a computer specifically built to learn and play chess (a game that is bounded and controlled). Looking back, it was straightforward in terms of modern machine learning; it needed quite a bit of computing power for its day, filling two large cabinets.

After Deep Blue, DeepMind Technologies (now part of Google) built a machine to tackle Go, which is more complex, with far more permutations. In 2016, “AlphaGo” beat a world-rated 9th-dan champion. Then followed a system called AlphaGo Zero, which was designed to learn games without access to human examples. This was reborn as “AlphaZero”, and then became “MuZero”. These systems were able to learn not only chess and Go but also Atari-type games by trial and error, with no instructions or rules. Each system was shown just the controls and the screen; it watched the screen and learned the games like a human would, reaching world-champion levels within a short time and without any guidance. This was brought about by “reinforcement learning” rather than “guided learning”.

In reinforcement learning terms, you get the machine to give itself feedback as it learns by trial and error. Its goals are simple: win, don't die, score points, and so on. It quickly learns which moves cause it harm, which moves help, and which moves score best, and it will gradually hone its game until it can beat human levels of performance. DeepMind’s “AlphaStar” learned to play a PC game called “StarCraft II” (a complex combat-style game with around 10^26 possible actions per move). AlphaStar can now beat almost all human players.
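That feedback loop is worth seeing once in miniature. Here is a hypothetical toy sketch (a five-cell corridor game of my own invention, nothing like StarCraft's scale) of tabular Q-learning, one of the simplest reinforcement-learning methods: the agent tries moves, gets rewarded or punished, and gradually hones a winning policy:

```python
import random
from collections import defaultdict

random.seed(0)

# Toy game: a 5-cell corridor. Reaching cell 4 wins (+1);
# stepping off the left edge "dies" (-1).
ACTIONS = [-1, +1]          # move left, move right
GOAL, DEATH = 4, -1

def play(q, epsilon, alpha=0.5, gamma=0.9):
    """Play one game, updating action values q by trial and error."""
    state = 2                                   # start mid-corridor
    while True:
        if random.random() < epsilon:           # explore: try a random move
            a = random.randrange(len(ACTIONS))
        else:                                   # exploit: best known move
            a = max(range(len(ACTIONS)), key=lambda i: q[(state, i)])
        nxt = state + ACTIONS[a]
        reward = 1 if nxt == GOAL else (-1 if nxt == DEATH else 0)
        done = nxt in (GOAL, DEATH)
        best_next = 0.0 if done else max(q[(nxt, i)] for i in range(len(ACTIONS)))
        # Feedback: nudge this move's value toward what it actually earned.
        q[(state, a)] += alpha * (reward + gamma * best_next - q[(state, a)])
        if done:
            return reward
        state = nxt

q = defaultdict(float)
for _ in range(200):                            # learn by trial and error
    play(q, epsilon=0.2)
print(play(q, epsilon=0.0))                     # greedy play now wins: 1
```

After 200 noisy games the move values for "walk right" dominate, so the trained, greedy agent wins every time: the same win/don't-die shaping described above, just at toy scale.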

What we can do with these “self-learners” is put them on opposite sides of a problem and get them to compete; that's the adversarial idea behind generative AI. Each “player” drives to be the best it can and gathers huge amounts of combination/permutation data to “learn” the best thing to do in each imagined scenario. After that phase you can take the output (and there will be a lot of it) and use it to create deterministic responses, rather than letting the machines “play live” on human-affecting controls such as steering and power control.

AI is part of the early learning sphere. We take that knowledge and use it; for example, we might turn it into fuzzy logic, or a series of alarm levels or response levels, and hard-code those, because they are then, by their very nature, deterministic (explainable and not random). The responses were not arrived at deterministically, but they provide data (knowledge), and once the output has been through testing it becomes science (wisdom): something we can measure, prove and repeat. We know that if input X happens, output Y will happen. We don't know how the machine got there, but we believe it's the best answer (out of, say, 50 million attempts). It works like an extremely powerful computer running a “Monte Carlo” analysis (which was never called AI).
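The Monte Carlo idea itself is worth seeing once: run huge numbers of random trials, measure the outcomes, and treat the statistics as the answer. A classic sketch (a textbook example of mine, not from any marine system) estimates pi by random sampling:

```python
import random

def monte_carlo_pi(trials=1_000_000, seed=42):
    """Estimate pi by throwing random darts at a unit square and
    counting how many land inside the quarter circle."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * hits / trials

estimate = monte_carlo_pi()
print(estimate)   # statistically, within a few thousandths of 3.14159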

Adoption 

In terms of adoption, the marine world tends to lag behind the land domain. Something I’m currently looking at is the use of machine learning and technology to maintain equipment (a broad brush that goes across everything: land, commercial, marine, and already naval marine too). Everyone wants good “bang for their buck”; they want to maintain their equipment in the most cost-effective manner, and hitherto it has mostly been a choice between reliability-centred maintenance (RCM) and simply letting things break and replacing them. Ahead of RCM there's a space driven by condition-based monitoring, where we can use the data coming back from artificial intelligence and machine learning to improve planned maintenance, and therefore equipment availability.

A good engineer is often able to hear when a machine is not right and will stop and investigate. However, a good sensor set, with graphs and self-comparison, will do that probably one or two orders of magnitude earlier in the cycle, because something beginning to fail often makes ultrasonic “noise” a long time before it makes audible noise. So just by measuring a bearing's vibration frequencies and how it's moving, you can start to work out whether it's getting old or tired. It’s the same with valves. A typical submarine that I might work on has hundreds of valves, all of which wear out and all of which may or may not need maintenance. So measuring them and working out which are the favourites to change each time is probably a good thing in maintenance terms.
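As an illustration (simulated signals and made-up frequencies, not real bearing data or any particular product), that "self-comparison" can be as simple as comparing the vibration energy in a high-frequency band against a healthy baseline:

```python
import numpy as np

def band_energy(signal, sample_rate, lo, hi):
    """Total spectral power of `signal` between frequencies lo and hi (Hz)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return power[(freqs >= lo) & (freqs <= hi)].sum()

rate = 2000                                   # samples per second
t = np.arange(0, 1, 1 / rate)
rng = np.random.default_rng(1)

# Healthy bearing: a 50 Hz rotation tone plus background noise.
healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size)
# Wearing bearing: a faint high-frequency defect tone has appeared.
worn = healthy + 0.8 * np.sin(2 * np.pi * 600 * t)

baseline = band_energy(healthy, rate, 500, 900)
alert = band_energy(worn, rate, 500, 900) > 5 * baseline
print(alert)   # True: the defect band carries far more energy than baseline
```

The defect tone barely changes the overall loudness, but in the 500 to 900 Hz band its energy dwarfs the healthy baseline, which is why a sensor set can flag wear long before an engineer can hear it.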

This condition-based monitoring is an area that industries are starting to adopt in AI terms. The ability to use machine learning, high technology and raw brute force computer power is now at the stage where you can do this cost effectively. This means a better business case and it becomes more attractive to the commercial world which is fundamentally driven by cost-benefit analysis.  

We’re also seeing progress and adoption of machine learning in areas such as route and loading optimisation; AI is helping companies work out accurate fuel use for a platform and plan routes that use knowledge of expected weather to reduce fuel burden.

Equally, technology probably could and should tell you how to load a ship, with warnings when it’s not being loaded properly. Survival of a ship after damage is another area where it can benefit us. In the past, a master mariner would look in books and at stability charts to decide what needed to be done. Now, a ship can know almost instantly and start closing valves, optimising what remains of the platform to keep itself safe.

On the superyacht front, we’re also seeing the technology adopted but it’s still largely driven by cost of application and trust.  

So, we can see that machine learning is popping up everywhere not just in the autonomous vessel space. 

One of the biggest problems impacting adoption is that the technology market is moving so fast that it's a bit like a fast-spinning roundabout: everyone fancies a go, but it’s spinning too fast for people to want to jump on right now, because they know that next time it comes around there will be a different technology on it and they will want to get on again. People are waiting to see where the technology is going before they jump on.

Intelligent Assistant 

I always say it’s IA rather than AI, because it's an intelligent assistant to the crew, and that's what it should be in that space: it's there to give people better information. It's there to support better decisions, or to prompt people in the more time-critical decision-making. It can help you reduce fuel burden and help change behaviours to make your ship cleaner and greener. It is just a set of tools to help you do these things using a “bigger computer”.

The biggest piece of advice for anyone, but especially those starting their careers today, is don't fear it; it doesn't do anything wild and wacky. It's just another computer. If you sit in front of a modern machinery-space control console, you've still got just as much say in what goes on as you would have if there were an AI machine behind it.

For those considering a career in machine learning or AI, I’d advise them to learn a computer language. I don't care which one, because learning one is 80% of the step towards understanding any other; the rest is syntax. I’d also suggest they go back and restudy statistics. The bulk of what machine learning does is statistical analysis, so if you understand a Monte Carlo process, that's really no different from a generative AI looking at a problem. The AI can look at it in deeper ways, process bigger numbers faster, and even tweak the way it looks at the problem using deliberate randomness, but it's still doing essentially the same thing.

Trust 

At the end of the day, the key thing about the adoption of machine learning, artificial intelligence and autonomy, (alongside cost-effectiveness), is developing people’s trust in the systems and helping them believe that the systems can be relied upon to do what they are meant to do. 

As with the adoption of any new technology, we need to bring people along with us, and sometimes it helps to bring it back to the everyday things we take for granted. For example, we drive cars with ABS, electronic steering, lane control and so on; these are technologies that assist us, we trust them to do their job and keep us safe, and we adopt them. Let’s embrace, rather than fear, the future of any new technology in marine.