List: knowledge distillation (1)
Hello Computer Vision
A non-CS major's review of Distilling the Knowledge in a Neural Network (2015)
https://arxiv.org/abs/1503.02531
Distilling the Knowledge in a Neural Network: "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome"
While studying knowledge distillation (hereafter KD), I felt the need to go back to the original paper ..
Self-/Semi-supervised learning
2023. 4. 30. 16:15
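
Since the listed post reviews Hinton et al.'s knowledge distillation paper, a minimal sketch of the distillation objective that paper introduces may help: the student is trained to match the teacher's temperature-softened class probabilities in addition to the usual hard labels. The function below is an illustrative PyTorch sketch, not code from the post itself; the temperature `T` and mixing weight `alpha` are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hedged sketch of the KD loss from Hinton et al. (2015)."""
    # Soften both distributions with temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence between softened distributions; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures, as noted in the paper.
    kd_term = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Standard cross-entropy on the hard labels.
    ce_term = F.cross_entropy(student_logits, labels)
    # Weighted combination of the soft-target and hard-label objectives.
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In practice the teacher logits come from a larger pretrained model (or an ensemble) run in eval mode, and only the student's parameters are updated with this loss.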