AI software to make tidal energy monitoring easier
The DeepSense team at Dalhousie University in Nova Scotia has developed new software that uses artificial intelligence to make environmental monitoring for tidal energy projects easier.
The software, named Echofilter, uses artificial intelligence and machine learning methods to automate the processing of environmental monitoring data from echosounders, a type of sonar that uses sound to detect fish and other marine life.
The software, developed as part of the technology development and testing initiative Pathway Program, is expected to contribute to more reliable and more effective monitoring of tidal turbines.
Daniel Hasselman, Pathway Program lead at the Fundy Ocean Research Centre for Energy (FORCE), said: “Echofilter, developed by DeepSense, is now being used by our team at FORCE, and we are able to run the software on a variety of hydroacoustic data sets.
“We’ve found that Echofilter is accurate and reduces the time spent manually processing hydroacoustic fish data collected in the Bay of Fundy by approximately 50%, doubling our productivity. Better yet, Echofilter is more responsive to the dynamic range of tidal conditions we experience in the Bay of Fundy and requires far less time to manually edit prior to data export and analyses”.
Chris Whidden, assistant professor in Dalhousie University’s Faculty of Computer Science who led Echofilter’s early planning and design, said: “We worked very closely with FORCE and OERA to understand their unique challenges, propose an initial design and modify that design during the project. The first step in using machine learning to improve business processes is understanding the data and where the biggest costs and time sinks are. This can be especially challenging in the ocean economy because of specialized sensors, proprietary processing software, and unique environmental features”.
Using these insights, Scott Lowe, a postdoctoral fellow in the Faculty of Computer Science at Dalhousie, led the development of a deep learning model that uses existing data to automate this process, saving the organization a significant amount of time.
Lowe said: “Our challenge was: can we automate that task, so the human doesn’t have to spend so long manually removing all of these bubbles from their data? Rather than being manually defined, the model we trained is learned from data.
“We trained the model by taking all those previous surveys that human experts had already annotated and asking the model to generate the same separation line around the entrained air as labelled by the human. After training, the model produces separation lines very similar to those drawn by the human annotators”.
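The article does not detail Echofilter’s architecture, but the supervised setup Lowe describes – learning a separation line from surveys already annotated by human experts – can be sketched in miniature. The following is only an illustration, not FORCE’s or DeepSense’s actual code: it uses a synthetic echogram and a plain least-squares regressor in place of a deep learning model, and every name and number in it is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_echogram(n_cols=64, n_depth=32):
    """Synthetic echogram: entrained air (bubbles) shows up as high
    acoustic intensity above a per-column boundary depth. The boundary
    plays the role of the human-annotated separation line."""
    boundary = 5 + (10 * rng.random(n_cols)).astype(int)
    grid = rng.normal(0.1, 0.05, (n_depth, n_cols))
    for col, b in enumerate(boundary):
        grid[:b, col] += 1.0  # strong echo from bubbles near the surface
    return grid, boundary

# Build a training set: one sample per echogram column, with the
# expert-labelled boundary depth as the target.
X_parts, y_parts = [], []
for _ in range(20):
    grid, boundary = make_echogram()
    X_parts.append(grid.T)      # each row is one column's intensity profile
    y_parts.append(boundary)
X = np.vstack(X_parts)
y = np.concatenate(y_parts)

# Fit a linear model mapping intensity profile -> boundary depth
# (a stand-in for the deep network trained on annotated surveys).
A = np.c_[X, np.ones(len(X))]   # add a bias column
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the separation line on a fresh, unseen echogram.
grid, truth = make_echogram()
pred = np.c_[grid.T, np.ones(grid.shape[1])] @ w
mae = np.abs(pred - truth).mean()  # mean error in depth bins
```

Even this toy regressor recovers the boundary to within a fraction of a depth bin on the synthetic data, which mirrors the idea in the quote: the model is not hand-tuned for each survey, it is fitted to reproduce the lines the annotators drew.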