[New Paper Published] Deep Learning in Plant Phenological Research: A Systematic Literature Review

Negin Katal conducted a systematic literature review analyzing all primary studies on deep learning approaches in plant phenology research. The paper presents the major findings from 24 selected peer-reviewed studies published in the last five years (2016–2021).

Research on plant phenology has become increasingly important because seasonal and interannual climatic variations strongly influence the timing of periodic events in plants. One of the most significant challenges is developing tools to analyze enormous amounts of data efficiently. Deep neural networks leverage image processing to detect patterns and periodic events in vegetation, supporting scientific research.

 

“[…] deep learning is primarily intended to simplify the very time-consuming and cost-intensive direct phenological measurements so far.”

 

Technological breakthroughs have powered deep learning approaches for plant phenology within the past five years. Our recently published paper describes the applied methods, categorized according to the studied phenological stages, vegetation type, spatial scale, and data acquisition. It also identifies and discusses research trends and highlights promising future directions. It is freely available here: https://www.frontiersin.org/articles/10.3389/fpls.2022.805738/full

To understand how scientific research on phenology is done, the different methods of retrieving phenological data need to be clear:

  • Individual-based observations include, for example, human observations of plants, cameras installed below the canopy, or even reviewing pressed, preserved plant specimens collected in herbaria over centuries and around the globe.
  • Near-surface measurements cover research plots on regional, continental, and global scales and are done, for example, via PhenoCams (near-surface digital cameras positioned just above the canopy) or drones.
  • Satellite remote sensing relies on satellite-derived indices, such as spectral vegetation indices (VIs), the enhanced vegetation index (EVI), and more.
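As an illustration of such indices: NDVI and EVI are computed directly from surface reflectance bands. The sketch below uses the standard formulas (with the common MODIS EVI coefficients G=2.5, C1=6, C2=7.5, L=1) on made-up reflectance values, purely as a minimal example:

```python
import numpy as np

# Illustrative surface reflectance values (fractions, 0-1) for three pixels.
red  = np.array([0.08, 0.12, 0.30])
nir  = np.array([0.45, 0.40, 0.35])
blue = np.array([0.04, 0.06, 0.10])

# NDVI: normalized difference vegetation index.
ndvi = (nir - red) / (nir + red)

# EVI with the standard MODIS coefficients (G=2.5, C1=6, C2=7.5, L=1).
evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

print(np.round(ndvi, 3))  # higher values = denser green vegetation
print(np.round(evi, 3))
```

Time series of such indices, aggregated per pixel over a season, are what satellite-based phenology studies feed into their models.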

Overview of methods for monitoring phenology.

Key findings:

The reviewed studies were conducted in eleven different countries (three in Europe, nine in North America, two in South America, six in Asia, and four in Australia and New Zealand) and across different vegetation types, i.e., grassland, forest, shrubland, and agricultural land. The vast majority of the primary studies examine phenological stages on single individuals. Ten studies explored phenology on a regional level; no single study operates on a global level. Therefore, deep learning is primarily intended to simplify the very time-consuming and cost-intensive direct phenological measurements performed so far.

In general, the main phenological stages are the breaking of leaf buds or initial growth, the expansion of leaves or start of season (SOS), the appearance of flowers, the appearance of fruits, senescence (coloring), and leaf abscission or end of season (EOS). More than half of the studies focused either on the expansion of leaves (SOS) or on the flowering time.

Across the primary studies, different methods were used to acquire training material for the deep learning approaches. Twelve studies used images from digital repeat photography and analyzed them with deep learning methods. The publication shares in-depth information on the different types of digital photography suitable for providing such training data.

Furthermore, it categorizes, compares, and discusses the deep learning methods applied in phenological monitoring. Classification and segmentation methods were found to be most frequently applied across all types of studies and proved to be very beneficial, mostly because they can eliminate (or support) tedious and error-prone manual tasks.

Future trends in phenology research with the use of deep learning

Machine learning methods need huge amounts of data to be trained. Therefore, increasing the amount of collected data is one of the key challenges, especially in regions or countries that so far lack traditional phenological observation networks. The paper describes methods and tools that will become important levers to support this kind of research, for example:

  • Installing cameras below the canopy that automatically take pictures and submit them over long periods of time is one way to tackle this.
  • PhenoCams prove to be a new and resourceful way to fuel further research: indirect methods track changes in images by deriving handcrafted features, such as green or red chromatic coordinates, from PhenoCam images and then apply algorithms to derive the timing of phenological events such as SOS and EOS. We expect many more studies to appear in the future evaluating PhenoCam images beyond the vegetation color indices calculated so far.
  • Citizen science data from plant identification apps such as Flora Incognita prove to be a long-term source of vegetation data. These images carry a timestamp and location information and can thus provide important information about, e.g., flowering periods, similar to herbarium material.
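The chromatic-coordinate approach mentioned above can be sketched in a few lines: compute the green chromatic coordinate (GCC) from daily RGB values and take SOS as the first day the GCC crosses half of its seasonal amplitude. The daily values below are synthetic and the threshold rule is one common convention, not the pipeline of any specific study; real pipelines would also smooth the series and handle noise:

```python
import numpy as np

# Synthetic daily mean RGB digital numbers for one PhenoCam region of
# interest, mimicking a spring green-up centered around day 120.
days = np.arange(1, 366)
green = 120 + 40 / (1 + np.exp(-(days - 120) / 10))
red = np.full_like(green, 110.0)
blue = np.full_like(green, 100.0)

# Green chromatic coordinate: share of green in the total RGB signal.
gcc = green / (red + green + blue)

# SOS estimate: first day GCC reaches 50% of its seasonal amplitude.
threshold = gcc.min() + 0.5 * (gcc.max() - gcc.min())
sos = int(days[np.argmax(gcc >= threshold)])
print("estimated SOS (day of year):", sos)  # ~day 119 for these values
```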

We see that the study of phenology can easily and successfully exploit deep learning methods to speed up the traditional gathering and evaluation of information. We, as a research team, are very proud to be a part of that and invite you to play a vital role by using Flora Incognita to observe the diversity and change of biodiversity around you.

If you have any questions about our research, don’t hesitate to reach out! You can find Negin Katal on ResearchGate and Twitter (@katalnegin), for example.

 

Publication:

Katal, N., Rzanny, M., Mäder, P., & Wäldchen, J. (2022). Deep learning in plant phenological research: A systematic literature review. Frontiers in Plant Science, 13. https://doi.org/10.3389/fpls.2022.805738

Can grasses be identified automatically via smartphone images?

Couch grass or ryegrass? Grasses are considered difficult to identify. Couch grass (Agropyron repens) is feared as a weed and often controlled in gardens. Ryegrass (Lolium perenne), on the other hand, is the basis of many lawn seed mixtures and a valuable forage grass. The question is: can you even differentiate them? In our recently published paper, we investigated whether grasses can be automatically recognized and distinguished despite their great similarity. We wanted to know which perspectives are suitable and whether identification would even be possible without flowers present.

We studied 31 species using images of the inflorescence, the leaves and the ligule. We found that a combination of different perspectives improves the results. The inflorescence provided the most information. If no flowers are present, pictures of the ligule from the direction of the leaf are best suited to distinguish the grass species. All images were taken with different smartphones.
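One simple way to combine perspectives is late fusion: average the per-class probabilities a classifier predicts for each image of the same plant. The sketch below uses made-up numbers and is a generic illustration, not the fusion scheme used in the paper:

```python
import numpy as np

# Hypothetical softmax outputs for one plant across three perspectives
# (three candidate species shown); all values are illustrative.
probs = {
    "inflorescence": np.array([0.70, 0.20, 0.10]),
    "leaf":          np.array([0.40, 0.35, 0.25]),
    "ligule":        np.array([0.30, 0.50, 0.20]),
}

# Late fusion: average the class probabilities over all perspectives.
fused = np.mean(list(probs.values()), axis=0)
prediction = int(np.argmax(fused))
print(np.round(fused, 3), "-> predicted class", prediction)
```

Note how the inflorescence view dominates here, matching the observation that it carries the most information, while the other views still shift the fused probabilities.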

What do we learn from this experiment for our plant identification app Flora Incognita? Even difficult groups can be identified automatically quite well, provided suitable images of the correct plant parts are supplied. These newly gained insights will be incorporated into the further development of our app.

In our experiment, we achieved > 96% accuracy for the 31 species when combining all perspectives. We also gained many, many training images that will significantly improve the reliability of the Flora Incognita app for grasses in the future.

 

Publication:

Rzanny M, Wittich HC, Mäder P, Deggelmann A, Boho D & Wäldchen J (2022) Image-Based Automated Recognition of 31 Poaceae Species: The Most Relevant Perspectives. Front. Plant Sci. 12:804140. https://doi.org/10.3389/fpls.2021.804140

 

Flora Incognita demonstrates high accuracy in Northern Europe

The Flora Incognita project aims to inspire people to get to know their (botanical) surroundings better, but also to think outside the box and reflect on the possibilities of artificial intelligence, deep learning, and biodiversity. A great example of this is what Jaak Pärtel did: as a student project, he investigated the accuracy of the Flora Incognita application in Estonia, Northern Europe, and even published a paper about it! With this exceptional project, Jaak won the Estonian National Youth Science Competition! Congratulations, Jaak! Here is a short interview to share more details.

 

Hello, Jaak, congratulations on your first place in the Estonian National Youth Science Competition. Would you have imagined that happening when you started the project?

I honestly had no idea what my project would become in a year. However, I was certain from the beginning that I wanted to do something that would not be “just another student project”. I got very positive reviews for the project in school, so I thought I would give it a try in the national competition. I was really surprised when I heard the results of the competition. And that was not all: I have since published a scientific article (co-authors Jana Wäldchen and Meelis Pärtel) based on the project’s dataset and will represent my country in the European Union Contest for Young Scientists 2020/2021 this September.

How did you get the idea to do this project?
My two interests were life sciences and technology, so I found a suitable combination of the two. I had heard of plant identification apps but was not sure how accurate they were. I wanted to have fieldwork experience and collect an extensive dataset to analyse it statistically.

Can you explain briefly what you investigated and how you went about it?
I investigated the accuracy of plant identification applications, Flora Incognita among them. I conducted the study in two parts: one with 1500 plant images from a database and a second with 1000 observations in the Estonian wilderness. I also investigated whether plants with flowers were identified more accurately and how much time automated identification took compared to traditional methods.

What are your main results in the project?
The main result of my project was that both applications reached close to 80% in accuracy in Estonian field conditions, with the correct species among the top five suggestions in circa 90% of the observations. In field conditions, plants with flowers were identified considerably more accurately than ones without them. Automatic identification took one minute compared to over four minutes for manual identification. During my project, I also translated Flora Incognita into my national language – Estonian.

What did you tell passersby when they saw you documenting flowers?  :-)
I had no such situations, as most of my fieldwork took place in locations with few people around. However, I would have said that I am a researcher collecting a dataset about plant apps. A surprising number of people have at least heard about the apps and would probably understand my mission in the field.

What are your next plans?
As of now, I am serving my country in mandatory conscription service, but after that I will start my Bachelor’s studies in biology and nature conservation at the University of Tartu. I would like to pursue science as a career and use innovative and computational methods in biology.

Publication:

Pärtel, J., Pärtel, M., & Wäldchen, J. (2021). Plant image identification application demonstrates high accuracy in Northern Europe. AoB PLANTS, 13(4). https://doi.org/10.1093/aobpla/plab050 (Editors’ Choice)
Images: Jaak Pärtel

How smartphones can help detect ecological change

Efficient pollen identification – Interdisciplinary research team combines image-based particle analysis with artificial intelligence

From pollen forecasting, honey analysis and climate-related changes in plant-pollinator interactions, analysing pollen plays an important role in many areas of research. Microscopy is still the gold standard, but it is very time consuming and requires considerable expertise. In cooperation with Technische Universität (TU) Ilmenau, scientists from the Helmholtz Centre for Environmental Research (UFZ) and the German Centre for Integrative Biodiversity Research (iDiv) have now developed a method that allows them to efficiently automate the process of pollen analysis. Their study has been published in the specialist journal New Phytologist.

Pollen is produced in a flower’s stamens and consists of a multitude of minute pollen grains, which contain the plant’s male genetic material necessary for its reproduction. The pollen grains get caught in the tiny hairs of nectar-feeding insects as they brush past and are thus transported from flower to flower. Once there, in the ideal scenario, a pollen grain will cling to the sticky stigma of the same plant species, which may then result in fertilisation. “Although pollinating insects perform this pollen delivery service entirely incidentally, its value is immeasurably high, both ecologically and economically,” says Dr. Susanne Dunker, head of the working group on imaging flow cytometry at the Department for Physiological Diversity at UFZ and iDiv. “Against the background of climate change and the accelerating loss of species, it is particularly important for us to gain a better understanding of these interactions between plants and pollinators.” Pollen analysis is a critical tool in this regard. 

Each species of plant has pollen grains of a characteristic shape, surface structure and size. When it comes to identifying and counting pollen grains – measuring between 10 and 180 micrometres – in a sample, microscopy has long been considered the gold standard. However, working with a microscope requires a great deal of expertise and is very time-consuming. “Although various approaches have already been proposed for the automation of pollen analysis, these methods are either unable to differentiate between closely related species or do not deliver quantitative findings about the number of pollen grains contained in a sample,” continues UFZ biologist Dr. Dunker. Yet it is precisely this information that is critical to many research subjects, such as the interaction between plants and pollinators. 

In their latest study, Susanne Dunker and her team of researchers have developed a novel method for the automation of pollen analysis. To this end they combined the high throughput of imaging flow cytometry – a technique used for particle analysis – with a form of artificial intelligence (AI) known as deep learning to design a highly efficient analysis tool, which makes it possible to both accurately identify the species and quantify the pollen grains contained in a sample. Imaging flow cytometry is a process that is primarily used in the medical field to analyse blood cells but is now also being repurposed for pollen analysis. “A pollen sample for examination is first added to a carrier liquid, which then flows through a channel that becomes increasingly narrow,” says Susanne Dunker, explaining the procedure. “The narrowing of the channel causes the pollen grains to separate and line up as if they are on a string of pearls, so that each one passes through the built-in microscope element on its own and images of up to 2,000 individual pollen grains can be captured per second.” Two normal microscopic images are taken plus ten fluorescence microscopic images per grain of pollen. When excited with light radiated at certain wavelengths by a laser, the pollen grains themselves emit light. “The area of the colour spectrum in which the pollen fluoresces – and at which precise location – is sometimes very specific. This information provides us with additional traits that can help identify the individual plant species,” reports Susanne Dunker. In the deep learning process, an algorithm works in successive steps to abstract the original pixels of an image to a greater and greater degree in order to finally extract the species-specific characteristics. 
“Microscopic images, fluorescence characteristics and high throughput have never been used in combination for pollen analysis before – this really is an absolute first.” Where the analysis of a relatively straightforward sample takes, for example, four hours under the microscope, the new process takes just 20 minutes. UFZ has therefore applied for a patent for the novel high-throughput analysis method, with its inventor, Susanne Dunker, receiving the UFZ Technology Transfer Award in 2019.
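Once a network has assigned class probabilities to each imaged grain, the quantification step reduces to taking the most probable species per grain and tallying. This is a minimal sketch with random stand-in probabilities, not the authors’ code:

```python
import numpy as np

# Stand-in for per-grain network outputs: random logits for 1000 grains
# and 3 species, turned into probabilities with a softmax.
rng = np.random.default_rng(0)
n_grains, n_species = 1000, 3
logits = rng.normal(size=(n_grains, n_species))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Most probable species per grain, then counts per species.
labels = probs.argmax(axis=1)
counts = np.bincount(labels, minlength=n_species)
print("grains per species:", counts)
```

Because every grain in the sample passes the imaging element individually, these counts are quantitative, which is exactly what earlier automated approaches could not deliver.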

The pollen samples examined in the study came from 35 species of meadow plants, including yarrow, sage, thyme and various species of clover such as white, mountain and red clover. In total, the researchers prepared around 430,000 images, which formed the basis for a data set. In cooperation with TU Ilmenau, this data set was then transferred using deep learning into a highly efficient tool for pollen identification. In subsequent analyses, the researchers tested the accuracy of their new method, comparing unknown pollen samples from the 35 plant species against the data set. “The result was more than satisfactory – the level of accuracy was 96 per cent,” says Susanne Dunker. Even species that are difficult to distinguish from one another, and indeed present experts with a challenge under the microscope, could be reliably identified. The new method is therefore not only extremely fast but also highly precise.

In the future, the new process for automated pollen analysis will play a key role in answering critical research questions about interactions between plants and pollinators. How important are certain pollinators like bees, flies and bumblebees for particular plant species? What would be the consequences of losing a species of pollinating insect or a plant? “We are now able to evaluate pollen samples on a large scale, both qualitatively and, at the same time, quantitatively. We are constantly expanding our pollen data set of insect-pollinated plants for that purpose,” comments Susanne Dunker. She aims to expand the data set to include at least those 500 plant species whose pollen is significant as a food source for honeybees.

Publication:
Susanne Dunker, Elena Motivans, Demetra Rakosy, David Boho, Patrick Mäder, Thomas Hornick, Tiffany M. Knight: Pollen analysis using multispectral imaging flow cytometry and deep learning. New Phytologist. https://doi.org/10.1111/nph.16882