The trail for artificial intelligence at sea was blazed by a small tugboat: the 28-meter Svitzer Hermod, one of Svitzer’s towing vessels, which performed a series of remote-controlled maneuvers in the port of Copenhagen in 2017. The tug managed, among other things, to moor, undock, rotate 360° and sail to Svitzer’s headquarters before docking again. The vessel was controlled remotely by its captain, stationed at a base far from the port, and the test cruise was a joint initiative of Rolls-Royce and Svitzer. That short cruise five years ago made history as the world’s first demonstration of a remotely operated commercial ship maneuvering in a port. The successful tests provided the incentive for the next step: trying to use artificial intelligence as a ship’s captain.
Many developers of autonomous ship projects insist that in the very near future (i.e., within a few years) vessels will sail the seas and oceans with no human involvement at all. Artificial intelligence experts, however, temper such enthusiasm. They point out that artificial intelligence has become a very fashionable topic lately, so many errors, distortions and unsupported claims circulate about it. One of the most common is treating every solution that uses machine learning as synonymous with artificial intelligence.
– Machine learning algorithms improve themselves based on the data they have analyzed, emphasizes Pawel Ellerik, a digitization expert at AppVerk. – Suppose we give such an algorithm the task of detecting the threat of a technical fault on a ship. If we feed it as much current data as possible, as well as historical data from previous failures, we do not need to show it which data indicates a risk of failure. Such an algorithm will begin to find correlations and features linking the failures on its own. It will uncover relationships we ourselves had no idea about – its ability to accurately analyze very large amounts of data far exceeds human capabilities.
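The "find the correlations yourself" behavior the expert describes can be sketched in a few lines of Python. The sensor names and numbers below are invented for illustration, and a single correlation pass stands in for what would really be a trained model; nobody tells the program which sensor matters:

```python
from math import sqrt

# Hypothetical historical maintenance records (synthetic, illustrative
# data, not real ship telemetry): sensor readings plus whether a
# failure followed.
records = [
    {"bearing_temp": 62, "ambient_temp": 18, "failed": 0},
    {"bearing_temp": 65, "ambient_temp": 25, "failed": 0},
    {"bearing_temp": 71, "ambient_temp": 15, "failed": 0},
    {"bearing_temp": 84, "ambient_temp": 22, "failed": 1},
    {"bearing_temp": 88, "ambient_temp": 17, "failed": 1},
    {"bearing_temp": 91, "ambient_temp": 24, "failed": 1},
]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

failed = [r["failed"] for r in records]
features = [k for k in records[0] if k != "failed"]
# Rank features by how strongly they co-vary with past failures --
# the program discovers on its own which sensor is linked to faults.
ranking = sorted(
    features,
    key=lambda f: abs(pearson([r[f] for r in records], failed)),
    reverse=True,
)
print(ranking[0])  # prints "bearing_temp"
```

On this toy data the bearing temperature emerges as the feature most correlated with failures, even though no one labeled it as relevant; that is the essence of the data-driven pattern finding described above.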
Artificial intelligence, on the other hand, is a much broader concept, according to the expert, because it implies the ability of a machine to take actions similar to those of humans: it is characterized by reasoning, planning, creativity and, perhaps most importantly, learning. Solutions using artificial intelligence will not only help predict a potential problem but also take independent action to solve it. In simple terms, machine learning is responsible for making predictions based on data, while artificial intelligence means, among other things, making automated decisions based on those predictions. As a limitation of machine learning, P. Ellerik points to the fact that it learns from historical data.
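The prediction-versus-decision split the expert draws can be sketched as two layers of code. The function names, thresholds and the stand-in risk formula are all invented for illustration; in a real system the prediction layer would be a trained model:

```python
def predict_failure_risk(sensor_readings):
    # Machine-learning side: map data to a prediction.
    # A hand-written rule stands in for a trained model here.
    return min(1.0, max(0.0, (sensor_readings["bearing_temp"] - 60) / 40))

def decide(sensor_readings):
    # "Artificial intelligence" side, as the article frames it:
    # act on the prediction automatically instead of just reporting it.
    risk = predict_failure_risk(sensor_readings)
    if risk > 0.8:
        return "shut down engine, reroute to nearest port"
    if risk > 0.5:
        return "reduce load, schedule inspection"
    return "continue normal operation"

print(decide({"bearing_temp": 85}))  # risk 0.625 -> "reduce load, schedule inspection"
```

The first function only answers "how likely is a failure?"; the second turns that number into an autonomous action, which is the step the article attributes to AI rather than to machine learning alone.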
– Predicting the behavior of something new (e.g., a freshly implemented solution) is therefore significantly harder. Equally important, machine learning deals in statistical truths. Literal truths, meanwhile, are sometimes more complicated: they are highly volatile and are not always reflected in historical data – he adds.
P. Ellerik explains the distinction this way: a statistical truth will tell us that a given vehicle under given conditions should drive 100 km on 10 liters of fuel; the literal truth may turn out to be that the vehicle drove 98 km on those 10 liters. Why? Because the historical data captured no correlation with seemingly irrelevant variables (e.g., slightly reduced pressure in one tire).
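The fuel example can be written out as a toy calculation. The 2% shortfall matches the article's 100 km vs 98 km figures; the function names and the fixed efficiency factor are illustrative assumptions, not a real model:

```python
# "Statistical truth": a model fit to history in which tire pressure
# never varied learned a flat 10 km per liter.
def predicted_range_km(fuel_l):
    return fuel_l * 10.0

# "Literal truth": reality includes variables absent from the history,
# here a slightly under-inflated tire costing 2% of range (the
# article's illustrative figure, not a measured effect).
def actual_range_km(fuel_l, efficiency_factor=0.98):
    return predicted_range_km(fuel_l) * efficiency_factor

print(predicted_range_km(10))  # 100.0 -- what the model promises
print(actual_range_km(10))     # 98.0  -- what the vehicle actually drove
```

The model is not wrong on average; it is simply silent about a variable its training data never exposed, which is exactly the limitation the expert describes.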
Magdalena Morze, a senior specialist-analyst in the Digital Economy Research Group at the Lukasiewicz Poznan Institute of Technology, adds that the whole difficulty in implementing and accepting artificial intelligence-based technologies comes down to one factor: trust.
– The situations that autonomous vehicles will face in the real world are highly unpredictable, and therein lies the whole difficulty in creating them. In human-robot interaction there is also the issue of trust on the human side. If people are to trust robots and autonomous vehicles, they want to know and understand how the machine works and on what basis it makes one decision rather than another – M. Morze points out. She also cites studies showing that even the anthropomorphic character of autonomous vehicles makes a difference to their acceptance. In one experiment, participants drove either a standard autonomous vehicle capable of controlling the steering wheel and speed, or a comparable autonomous vehicle with additional anthropomorphic features: a name, a gender and a voice. People who drove the anthropomorphized vehicle reported that they trusted it more, were more relaxed during an accident and were less likely to blame the vehicle for causing the crash.
The mass deployment of autonomous vessels using artificial intelligence could also mean the emergence of new forms of hacking attacks.
Łukasz Jachowicz, a cybersecurity specialist at Mediarecovery (a computer forensics company), gives the example of the recently popular ChatGPT, which was supposed to have built-in mechanisms preventing its use for improper purposes. The result of hackers’ creativity is the popular write-ups circulating on the Internet that explain how to convince the AI to help write malware or phishing messages.
Article developed with Namiary na Morze i Handel magazine
Photo: Namiary na Morze i Handel magazine