
Deep Learning (7)

[Paper] Deep Learning, Yann LeCun, 2015 Source: https://www.nature.com/articles/nature14539 This paper explains how deep learning mechanisms actually work. Introduction Deep learning methods are essentially representation-learning methods that discover the representations needed for detection or classification. A deep learning network is composed of multiple non-linear modules, and each module represents the given image at its own level. .. 2023. 8. 20.
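As a minimal sketch of what "multiple non-linear modules" means in practice (my illustration, not code from the paper), each stacked module below transforms the previous representation into one at a higher level of abstraction; the layer sizes are arbitrary choices.

import torch
import torch.nn as nn

# Minimal sketch: a stack of non-linear modules, each producing a
# representation of the input image at its own level of abstraction.
# Layer sizes are arbitrary illustrative choices, not from the paper.
modules = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),   # low-level edges and textures
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),  # mid-level motifs
    nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU()),  # higher-level parts
])

x = torch.randn(1, 3, 32, 32)       # dummy image batch
representations = []
for m in modules:
    x = m(x)                        # each module re-represents the output of the one below
    representations.append(x)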
A Holistic View of Perception in Intelligent Vehicles Ghassan AlRegib, and Mohit Prabhushankar. Tutorial on ‘A Holistic View of Perception in Intelligent Vehicles’. IEEE Intelligent Vehicle Symposium (IV 2023), Anchorage, AK, USA, June 4, 2023. pdf: https://bpb-us-w2.wpmucdn.com/sites.gatech.edu/dist/4/3061/files/2023/07/IV2023_Tutorial_Perception_WatermarkCitation-1-compressed.pdf Perception and Autonomy Perception in Autonomous Vehicles includes .. 2023. 7. 30.
[Paper] Self-Distilled Self-supervised Representation Learning Reference: https://arxiv.org/abs/2111.12958 Self-Distilled Self-Supervised Representation Learning State-of-the-art frameworks in self-supervised learning have recently shown that fully utilizing transformer-based models can lead to a performance boost compared to conventional CNN models. Striving to maximize the mutual information of two views of an imag.. (arxiv.org) Summary: to improve intermediate-layer performance, s.. 2023. 3. 14.
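A rough sketch of the general self-distillation idea summarized above (the class, projection head, and loss below are hypothetical; the actual SDSSL method differs in detail): an intermediate layer's embedding is pulled toward the final layer's embedding in addition to the main self-supervised objective.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative only: make an intermediate layer's embedding predict
# (distill from) the final layer's embedding. Names and dimensions
# are hypothetical, not the SDSSL implementation.
class ToyEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(256, dim), nn.ReLU())
        self.inter_head = nn.Linear(256, dim)   # projection for the intermediate layer

    def forward(self, x):
        h_mid = self.block1(x)
        h_last = self.block2(h_mid)
        return self.inter_head(h_mid), h_last

def self_distill_loss(z_mid, z_last):
    # pull the intermediate embedding toward the (detached) final embedding
    return 1 - F.cosine_similarity(z_mid, z_last.detach(), dim=-1).mean()

enc = ToyEncoder()
z_mid, z_last = enc(torch.randn(8, 784))
loss = self_distill_loss(z_mid, z_last)   # added to the main self-supervised loss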
Creating a Preprocessing Layer

import torch
import torch.nn as nn

class Normalize(nn.Module):
    def __init__(self, mean, std):
        super(Normalize, self).__init__()
        # reshape to (1, C, 1, 1) so the stats broadcast over NCHW image batches
        self.mean = torch.tensor(mean, device='cuda').view(1, -1, 1, 1)
        self.std = torch.tensor(std, device='cuda').view(1, -1, 1, 1)

    def forward(self, input):
        x = input * 255      # recover the 0-255 pixel range
        x = x - self.mean    # subtract the per-channel mean
        x = x / self.std     # divide by the per-channel std
        return x

# inside the post's model-building function: prepend the preprocessing
# layer to the trained model when exporting for ONNX
if onnx:
    model = nn.Sequential(
        Normalize([mean_r, mean_g, mean_b], [std_r, std_g, std_b]),
        model,
    )
    return model

2023. 1. 7.
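A hedged usage sketch of the Normalize layer above, assuming the wrapped model is then exported with torch.onnx.export; the backbone, the 0-255 ImageNet statistics, the input shape, and the file name are illustrative assumptions, not values from the post.

import torch
import torch.nn as nn
import torchvision

# Hypothetical usage of the Normalize layer defined above; backbone,
# statistics, input shape, and file name are assumptions for illustration.
backbone = torchvision.models.resnet18(weights=None).cuda().eval()
wrapped = nn.Sequential(
    Normalize([123.675, 116.28, 103.53], [58.395, 57.12, 57.375]),  # ImageNet stats on the 0-255 scale (assumed)
    backbone,
)

dummy = torch.rand(1, 3, 224, 224, device='cuda')   # inputs already scaled to [0, 1]
torch.onnx.export(wrapped, dummy, 'model_with_preprocess.onnx',
                  input_names=['image'], output_names=['logits'])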
[Paper] A survey on deep geometry learning: From a representation perspective Source: https://link.springer.com/content/pdf/10.1007/s41095-020-0174-8.pdf Image source: https://en.wikipedia.org/wiki/Constructive_solid_geometry Constructive solid geometry - Wikipedia: creating a complex 3D surface or object by combining primitive objects; CSG objects can be represented by binary trees, where leaves represent primit.. 2022. 10. 8.
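A small illustrative sketch (not from the survey) of the CSG idea quoted above: a CSG object as a binary tree whose leaves are primitives and whose internal nodes are Boolean operations; the class and field names are my own.

from dataclasses import dataclass
from typing import Optional

# Minimal sketch of a CSG binary tree: leaves are primitive shapes,
# internal nodes combine their children with a Boolean operation.
# Class and field names are illustrative, not from the survey.
@dataclass
class CSGNode:
    op: Optional[str] = None            # 'union', 'intersection', 'difference' for internal nodes
    primitive: Optional[str] = None     # e.g. 'sphere', 'cube', 'cylinder' for leaves
    left: Optional['CSGNode'] = None
    right: Optional['CSGNode'] = None

# a simplified version of the classic CSG example: (cube ∩ sphere) − cylinder
shape = CSGNode(
    op='difference',
    left=CSGNode(op='intersection',
                 left=CSGNode(primitive='cube'),
                 right=CSGNode(primitive='sphere')),
    right=CSGNode(primitive='cylinder'),
)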
[Concept Notes] Deep Learning Performance Terminology Here I want to organize the hardware-performance terms that come up when reading papers. 1. Throughput Definition: the amount of computation processed per unit time. FLOPS: the number of floating-point operations a deep learning accelerator can perform per second; the higher the throughput, the better the hardware. Accelerators: GPU, FPGA, ASIC. GPU: V100 14 TFLOPS, 1080Ti 11.3 TFLOPS, T4 8.1 TFLOPS. FPGA: a programmable chip. ASIC: a processor specialized for deep learning computation, e.g. the TPU. Memory: HBM, the memory interface with the largest bandwidth, used to support large-scale computation. 2. Latency Definition: the time an inference takes. With a large batch size, latency generally .. 2022. 5. 7.
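A small hedged sketch of how the two terms are typically measured for a PyTorch model (the model, device, batch size, and input shape below are assumptions, not from the post): latency is the time per forward pass of one batch, and throughput is the number of images processed per second.

import time
import torch
import torchvision

# Illustrative measurement of latency and throughput; model, device,
# batch size, and input shape are assumptions, not from the post.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torchvision.models.resnet18(weights=None).to(device).eval()

batch = torch.randn(32, 3, 224, 224, device=device)

with torch.no_grad():
    for _ in range(5):                       # warm-up runs
        model(batch)
    if device == 'cuda':
        torch.cuda.synchronize()             # wait for queued GPU work
    start = time.perf_counter()
    runs = 20
    for _ in range(runs):
        model(batch)
    if device == 'cuda':
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

latency = elapsed / runs                      # seconds per forward pass (one batch)
throughput = batch.shape[0] * runs / elapsed  # images processed per second
print(f'latency: {latency * 1000:.1f} ms/batch, throughput: {throughput:.0f} img/s')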