Category: knowledge distillation (1)
Hello Data

https://arxiv.org/abs/1503.02531
Distilling the Knowledge in a Neural Network: "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome" (arxiv.org)
While studying knowledge distillation (hereafter KD), I felt the need to go back to the original paper ..
Self, Semi-supervised learning
2023. 4. 30. 16:15
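
The paper excerpted above distills an ensemble (or a single large teacher) into a smaller student by training the student on the teacher's temperature-softened output distribution alongside the true labels. Below is a minimal sketch of that distillation loss in PyTorch; the temperature `T` and mixing weight `alpha` are assumed hyperparameters for illustration, not values taken from the post.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term: KL divergence between the temperature-softened
    # teacher and student distributions, scaled by T^2 as in Hinton et al.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In practice the teacher's logits are computed with gradients disabled (e.g. under `torch.no_grad()`), and only the student's parameters are updated with this loss.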