PointCLIP: Point cloud understanding by CLIP. R Zhang, Z Guo, W Zhang, K Li, X Miao, B Cui, Y Qiao, P Gao, H Li. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022. Cited by 164.

Reliable data distillation on graph convolutional network. W Zhang, X Miao, Y Shao, J Jiang, L Chen, O Ruas, B Cui. Proceedings of the 2020 ACM SIGMOD International Conference on Management of …, 2020. Cited by 57.

HET: Scaling out huge embedding model training via cache-enabled distributed framework. X Miao, H Zhang, Y Shi, X Nie, Z Yang, Y Tao, B Cui. Proceedings of the VLDB Endowment 15 (2), 312-320, 2021. Cited by 38.

Heterogeneity-aware distributed machine learning training via partial reduce. X Miao, X Nie, Y Shao, Z Yang, J Jiang, L Ma, B Cui. Proceedings of the 2021 International Conference on Management of Data, 2262 …, 2021. Cited by 35.

DeGNN: Improving graph neural networks with graph decomposition. X Miao, NM Gürel, W Zhang, Z Han, B Li, W Min, SX Rao, H Ren, Y Shan, et al. Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data …, 2021. Cited by 32*.

PS2: Parameter server on Spark. Z Zhang, B Cui, Y Shao, L Yu, J Jiang, X Miao. Proceedings of the 2019 International Conference on Management of Data, 376-388, 2019. Cited by 28.

CALIP: Zero-shot enhancement of CLIP with parameter-free attention. Z Guo, R Zhang, L Qiu, X Ma, X Miao, X He, B Cui. Proceedings of the AAAI Conference on Artificial Intelligence 37 (1), 746-754, 2023. Cited by 25.

Lasagne: A multi-layer graph convolutional network framework via node-aware deep architecture. X Miao, W Zhang, Y Shao, B Cui, L Chen, C Zhang, J Jiang. IEEE Transactions on Knowledge and Data Engineering, 2021. Cited by 24.

ROD: Reception-aware online distillation for sparse graphs. W Zhang, Y Jiang, Y Li, Z Sheng, Y Shen, X Miao, L Wang, Z Yang, B Cui. Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data …, 2021. Cited by 22.

Dense-to-sparse gate for mixture-of-experts. X Nie, S Cao, X Miao, L Ma, J Xue, Y Miao, Z Yang, Z Yang, B Cui. 2021. Cited by 19.

Memory-aware framework for fast and scalable second-order random walk over billion-edge natural graphs. Y Shao, S Huang, Y Li, X Miao, B Cui, L Chen. The VLDB Journal 30 (5), 769-797, 2021. Cited by 18.

PSGraph: How Tencent trains extremely large-scale graphs with Spark? J Jiang, P Xiao, L Yu, X Li, J Cheng, X Miao, Z Zhang, B Cui. 2020 IEEE 36th International Conference on Data Engineering (ICDE), 1549-1557, 2020. Cited by 18.

Galvatron: Efficient transformer training over multiple GPUs using automatic parallelism. X Miao, Y Wang, Y Jiang, C Shi, X Nie, H Zhang, B Cui. arXiv preprint arXiv:2211.13878, 2022. Cited by 16.

Towards communication-efficient vertical federated learning training via cache-enabled local updates. F Fu, X Miao, J Jiang, H Xue, B Cui. arXiv preprint arXiv:2207.14628, 2022. Cited by 16.

Memory-aware framework for efficient second-order random walk on large graphs. Y Shao, S Huang, X Miao, B Cui, L Chen. Proceedings of the 2020 ACM SIGMOD International Conference on Management of …, 2020. Cited by 15.

HET-GMP: A graph-based system approach to scaling large embedding model training. X Miao, Y Shi, H Zhang, X Zhang, X Nie, Z Yang, B Cui. Proceedings of the 2022 International Conference on Management of Data, 470-480, 2022. Cited by 13.

Distributed graph neural network training: A survey. Y Shao, H Li, X Gu, H Yin, Y Li, X Miao, W Zhang, B Cui, L Chen. arXiv preprint arXiv:2211.00216, 2022. Cited by 12.

HetuMoE: An efficient trillion-scale mixture-of-expert distributed training system. X Nie, P Zhao, X Miao, T Zhao, B Cui. arXiv preprint arXiv:2203.14685, 2022. Cited by 12.

TSplit: Fine-grained GPU memory management for efficient DNN training via tensor splitting. X Nie, X Miao, Z Yang, B Cui. 2022 IEEE 38th International Conference on Data Engineering (ICDE), 2615-2628, 2022. Cited by 10.

SpecInfer: Accelerating generative LLM serving with speculative inference and token tree verification. X Miao, G Oliaro, Z Zhang, X Cheng, Z Wang, RYY Wong, Z Chen, et al. arXiv preprint arXiv:2305.09781, 2023. Cited by 9.