
Identification and Classification of Exfoliated Graphene Flakes from Microscopy Images Using a Hierarchical Deep Convolutional Neural Network

Engineering Applications of Artificial Intelligence (2023) | CCF C | SCI Zone 1

Stevens Institute of Technology | University of Massachusetts Amherst

Cited 13 | Views 30
Abstract
Identification of exfoliated graphene flakes and classification of the thickness are important in the nanomanufacturing of advanced materials and devices. This paper presents a deep learning method to automatically identify and classify exfoliated graphene flakes on Si/SiO2 substrates from optical microscope images. The presented framework uses a hierarchical deep convolutional neural network that is capable of learning new images while preserving the knowledge from previous images. The deep learning model was trained and used to classify exfoliated graphene flakes into monolayer, bi-layer, tri-layer, four-to-six-layer, seven-to-ten-layer, and bulk categories. Compared with existing machine learning methods, the presented method showed high accuracy and efficiency as well as robustness to the background and resolution of images. The results indicated that the pixel-wise accuracy of the trained deep learning model was 99% in identifying and classifying exfoliated graphene flakes. This research will facilitate scaled-up manufacturing and characterization of graphene for advanced materials and devices.
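The abstract reports a pixel-wise accuracy of 99% for segmenting flakes into six thickness categories. As a rough illustration of that evaluation, the NumPy sketch below shows how a per-pixel label map is derived from class scores and how pixel-wise accuracy is computed; the class list assumes a background class alongside the six thickness categories, and this is not the paper's model, only the metric.

```python
import numpy as np

# Thickness categories from the abstract, plus an assumed background class.
CLASSES = ["background", "monolayer", "bi-layer", "tri-layer",
           "4-6 layers", "7-10 layers", "bulk"]

def label_map(logits):
    """Collapse per-pixel class scores (H, W, C) to a label map (H, W)."""
    return np.argmax(np.asarray(logits), axis=-1)

def pixelwise_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    pred = np.asarray(pred)
    truth = np.asarray(truth)
    return float((pred == truth).mean())
```

Here `pixelwise_accuracy` is the metric the abstract's 99% figure refers to; the specific class ordering is an assumption for illustration.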
Key words
Deep convolutional neural network, Machine learning, Nanomaterials, Optimized adaptive gamma correction, Semantic segmentation, Two-dimensional (2D) material
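The key words list optimized adaptive gamma correction as a preprocessing step. The paper's optimized variant is not detailed on this page; the NumPy sketch below shows a common adaptive rule that picks the gamma exponent from the image's mean intensity so that the corrected image is pulled toward mid-gray, which is one plausible baseline for normalizing microscope-image brightness.

```python
import numpy as np

def adaptive_gamma_correction(image):
    """Gamma-correct a float image in [0, 1] toward mid-gray.

    The exponent is chosen so the mean intensity maps to 0.5
    (gamma = log 0.5 / log mean) -- a generic adaptive rule,
    not the paper's optimized method.
    """
    image = np.asarray(image, dtype=float)
    mean = float(np.clip(image.mean(), 1e-6, 1 - 1e-6))
    gamma = np.log(0.5) / np.log(mean)
    return np.power(image, gamma)
```

For a uniformly dark image of intensity 0.25 this yields gamma = 0.5, brightening every pixel to 0.5.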
Related Papers
Isaac Chuang, Michael A Nielsen
2004

Cited 49,963 | Views

Schedin Bf
2005

Cited 9,267 | Views
