
Joint Dynamic Data and Model Parallelism for Distributed Training of DNNs over Heterogeneous Infrastructure

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS (2025)

Keywords
Training, data models, pipelines, optimization, synchronization, parallel processing, scalability, partitioning algorithms, costs, computational modeling, distributed deep learning system, data parallelism, model parallelism, data assignment, model partitioning