We are inviting motivated researchers and research engineers with a strong background in mobile deep learning frameworks to join our team in the Bay Area. The team develops cutting-edge technologies in model compression, mobile deep learning inference engines, and beyond.
Responsibilities:
1. Explore cutting-edge algorithms in neural network model compression.
2. Conduct research and development on deep learning inference engine optimization.
3. Design high-efficiency deep learning frameworks for mobile and edge platforms (iOS/Android/Linux).
Qualifications:
1. M.S. or Ph.D. in Computer Science, Electrical Engineering, or a closely related field.
2. Familiarity with modern neural network architectures.
3. Strong experience with at least one deep learning framework, such as TensorFlow, PyTorch, or ncnn.
4. Experience adding new operators or layers to at least one deep learning framework.
5. Familiarity with deep learning framework optimization techniques, such as GEMM, Winograd, and model compression.
6. Experience with low-level optimization on mobile CPUs/GPUs, embedded devices, or NPUs.
7. Solid programming skills in C/C++/Python, plus mobile platform development experience.
8. Strong analytical and problem-solving skills; good written and oral communication skills.