Post list: knowledge distillation (1)
Hello Computer Vision
A Non-Major's Review of Distilling the Knowledge in a Neural Network (2015)
https://arxiv.org/abs/1503.02531 Distilling the Knowledge in a Neural Network: "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome ..." As I started studying knowledge distillation (hereafter KD), I felt the need to go back to the original paper ..
Self-/Semi-supervised learning
2023. 4. 30. 16:15
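
The preview cuts off before the method itself, so here is a minimal sketch of the temperature-based distillation loss the paper proposes: the student is trained to match the teacher's temperature-softened output distribution in addition to the hard labels. The function name and the `T` and `alpha` defaults are illustrative choices, not taken from the post:

```python
# Minimal sketch of the distillation loss from Hinton et al. (2015).
# Assumes a PyTorch setup; names like `distillation_loss` are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: both teacher and student outputs are softened
    # with the same temperature T before comparing distributions.
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    # The paper scales the soft-target term by T^2 so its gradient
    # magnitude stays comparable to the hard-label term.
    soft_loss = F.kl_div(log_soft_student, soft_teacher,
                         reduction="batchmean") * (T ** 2)
    # Standard cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

In use, the teacher's logits are computed under `torch.no_grad()` and only the student is updated; `alpha` trades off imitation of the teacher against fitting the hard labels.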