
Artificial intelligence algorithms have been known for several decades. For most of that time, however, they were used mainly by scientists: the natural barrier to their popularization was their high demand for computing power, memory and data. The development of computers has since made computing resources readily available, while the growth of the IoT (Internet of Things) and of computer networks has provided a steady supply of huge amounts of data.
At the beginning of the past decade, AI began to penetrate public consciousness and the media with increasing clarity: first in the form of AI assistants (e.g., Siri), and later in the form of publicly available large language models. An AI boom began, and huge resources were allocated to the development of the field. However, AI is not just phone assistants and large language models. It is also machine learning, the analysis of images (including handwriting recognition, in use for decades) and sounds, expert systems, classifiers, genetic algorithms and much more.
Artificial intelligence copes well with very complex relationships: detecting trends and patterns in data, analyzing incomplete and “noisy” data, and finding very good (albeit not optimal) solutions to extremely complex problems, such as the traveling salesman problem. Not surprisingly, it quickly found application in areas as important to maritime affairs as transportation problems, meteorological and oceanographic models, and the monitoring and operation of technical systems. In applications of this type, the number of factors affecting the result is huge and the relationships between them are so complicated that traditional computational methods cope poorly with them.
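To make the last point concrete, below is a minimal sketch of a genetic algorithm, one of the techniques mentioned above, searching for a good (though not necessarily optimal) round trip through a set of points. Everything in it is illustrative: the random “cities”, the population size and the generation count are assumptions made for the example, not parameters of any system described in this article.

```python
import random
import math

# Illustrative city coordinates (x, y); in a real routing problem these
# would be ports or waypoints with real distances between them.
CITIES = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

def route_length(route):
    """Total length of a closed tour visiting the cities in the given order."""
    return sum(
        math.dist(CITIES[route[i]], CITIES[route[(i + 1) % len(route)]])
        for i in range(len(route))
    )

def crossover(a, b):
    """Order crossover: copy a slice of parent a, fill the rest from parent b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [city for city in b if city not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(route, prob=0.2):
    """Occasionally swap two cities to keep the population diverse."""
    if random.random() < prob:
        i, j = random.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]
    return route

# Evolve a population of random tours; keep the best half each generation
# and breed the other half from it. The result is usually a short tour,
# but there is no guarantee it is the shortest one.
population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(100)]
for generation in range(300):
    population.sort(key=route_length)
    parents = population[:50]
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(50)
    ]
    population = parents + children

print(f"Best tour length found: {route_length(min(population, key=route_length)):.1f}")
```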
Around 2018, a number of initiatives emerged to use AI in the maritime industry. Maersk, in cooperation with the Boston Consulting Group, developed a tool that optimizes ship routes and speeds to reduce fuel consumption and emissions. Stena Line, in cooperation with Hitachi, developed the Stena Fuel Pilot system, which uses AI to optimize ferry routes, aiming to cut fuel consumption by 3-5% per vessel and lower CO₂ emissions by about 15,000 tons per year. OOCL partnered with Microsoft Research Asia to use artificial intelligence to save up to $10 million a year through better route planning and to improve on-time delivery. Meanwhile, the Norwegian company StormGeo uses AI to build advanced weather forecasts that help optimize sea routes. The Weather Company’s system, developed by IBM, offers similar functionality, and the French company Sinay uses AI in its platform for forecasting meteorological and oceanographic conditions, which is likewise used to optimize ship routes.
Ports, too, are increasingly turning to AI-based solutions. The European leader here is the port of Rotterdam, which uses AI to predict the best time to berth, unload and transship ships. This has reduced waiting times for ships by 20% and consequently increased the port’s capacity significantly without investment in its infrastructure. The port’s future VTS (Vessel Tracking System), which monitors both ships and the drones used by the port and by companies operating in its area, uses AI to detect dangerous situations based not only on the location and behavior of the monitored objects but also, for example, on meteorological data. AI is also used to plan the dredging of port channels and to manage the life cycle of technical assets and infrastructure. Documentation handling at the port has likewise been optimized with AI, saving 810 man-days per year (a 71% reduction) and significantly reducing documentation errors. The port also uses autonomous vehicles, synchronized with automated cranes, to transport containers.
The port of Busan in South Korea should be considered the world leader in using AI to optimize port operations. A “digital twin” has been created there: a digital copy of the port in which AI simulates and optimizes ship movements, crane operations and storage space management. It is fed with data from a vast network of industrial IoT sensors, data on the location and operation of the port’s machinery and equipment, and even data on the location of every employee in the port. The additional annual profit from this solution, resulting from handling more containers, has been estimated at $7.3 million.
The enormous media popularity of AI has brought with it equally enormous expectations of the technology. But here, as with other products and technologies, we can observe the typical product life curve: the initial delight and surge of expectations is followed by a phase of disappointment and disillusionment, and then by a phase of stable development and adoption. In the case of AI, we are probably entering the second phase: more and more managers see not only its advantages and opportunities, but also its limitations and the threats associated with it.
It is not uncommon to see AI applied to problems that could easily be solved by traditional methods. Meanwhile, training an AI model requires very large amounts of computing power and memory, and consumes significant amounts of energy. It is therefore worth asking at the outset whether the problem you plan to solve with AI could not be tackled more easily and cheaply with more traditional methods.
Ensuring the right quality and quantity of data can also be a challenge. If an AI model is trained on data that contains systematic errors, covers an incomplete range of variability, or has been manipulated, it will return incorrect results. If, for example, we train a model on data describing the operation of a system under normal conditions, and the conditions then cease to be normal during its use (extreme weather, say, or economic turmoil), the results it returns may be far from reality.
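A toy illustration of this pitfall, with entirely made-up numbers: a flexible model is fitted to “fuel consumption” data collected only under calm conditions and is then asked about a storm it has never seen. The relationship and all values below are invented for the example.

```python
import numpy as np

# Made-up training data: fuel consumption measured only under "normal"
# conditions (wind speeds of 0-10 m/s). The quadratic shape is illustrative.
rng = np.random.default_rng(0)
wind = rng.uniform(0, 10, 200)
fuel = 50 + 0.4 * wind**2 + rng.normal(0, 2, 200)

# A flexible model fitted to this range interpolates well...
model = np.polynomial.Polynomial.fit(wind, fuel, deg=6)
print(f"Prediction at  8 m/s (inside training range):  {model(8.0):8.1f}")

# ...but for conditions the training data never covered (a 25 m/s storm),
# the model still confidently returns a number, and it can be wildly wrong.
print(f"Prediction at 25 m/s (outside training range): {model(25.0):8.1f}")
print(f"Value implied by the assumed relationship:     {50 + 0.4 * 25**2:8.1f}")
```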
Another danger associated with AI is the uncritical acceptance of its results. AI models will virtually always return some result, even when it is not correct. What’s more, such a result can often look very plausible. Users of large language models (LLMs) have surely found this out more than once, when a model starts hallucinating and, for example, invents book titles that do not exist, which can be difficult to notice at first glance. This is because LLMs statistically determine the most likely continuation of a text given the input and what has been generated so far. They do not perform a substantive analysis of the question, and they make no logical inference. Such a model can be called a “perfect liar”: it mixes facts with its hallucinations in a way that is very difficult to detect. And this is how not only LLMs work, but virtually all models based on artificial neural networks.
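That mechanism can be shown in a heavily simplified sketch. The toy “model” below is just a hand-written table of next-word probabilities (real LLMs learn vastly richer statistics over token sequences), but the generation loop illustrates the point: it picks statistically plausible continuations and contains no notion of truth.

```python
import random

# A toy "language model": for each word, the probability of the next word.
# All entries are invented for the example.
NEXT_WORD_PROBS = {
    "written": {"by": 1.0},
    "by": {"a": 0.5, "the": 0.3, "Smith": 0.2},
    "a": {"famous": 0.6, "young": 0.4},
    "the": {"captain": 0.7, "author": 0.3},
    "famous": {"author": 1.0},
    "young": {"sailor": 1.0},
}

def generate(prompt, steps=4):
    """Sample a statistically likely continuation, one word at a time.

    Nothing here checks whether the output is true: the model happily
    attributes the book to whichever name is statistically plausible.
    """
    words = prompt.split()
    for _ in range(steps):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if probs is None:
            break  # the toy table ran out; a real model never does
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the book was written"))
```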
Artificial intelligence is a great tool. It makes it possible to overcome many limitations of traditional IT, to detect and exploit previously unknown dependencies, and to take over many tedious tasks from people. It also has significant potential for streamlining many processes, which translates into concrete profits. To use such tools properly, however, it is necessary to understand their specifics and how they work, as well as the risks that follow. It is also important to remember that the field is evolving very quickly: new tools and algorithms, new applications, but also new threats are constantly emerging. AI tools should therefore be implemented and used in a thoughtful and controlled manner, and the issue of accountability for the actions of AI must be addressed.
The article was written in collaboration with Namiary Na Morze i Handel – a biweekly magazine providing expert information on the most important events and issues in the Polish maritime economy.