Inception Module | Summary

References Udacity (2016. 6. 6.). Inception Module. YouTube. [LINK] Udacity (2016. 6. 6.). 1×1 Convolutions. YouTube. [LINK] Tommy Mulc (2016. 9. 25.). Inception Modules: Explained and Implemented. [LINK] Szegedy et al. (2015). Going Deeper with Convolutions. CVPR 2015. [arXiv] Summary History The inception module was first introduced in GoogLeNet for the ILSVRC’14 competition. Key concept Let a convolutional network decide […]
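A minimal sketch of the idea, assuming TensorFlow's Keras API (the filter counts below are illustrative hyperparameters, not necessarily the ones from the GoogLeNet paper): the module runs 1×1, 3×3, and 5×5 convolutions plus a 3×3 max-pool in parallel and concatenates their outputs along the channel axis, with 1×1 convolutions as cheap bottlenecks before the larger filters.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_module(x, f1, f3_reduce, f3, f5_reduce, f5, f_pool):
    """One inception block: parallel branches concatenated on channels.
    Filter counts are illustrative, not the paper's exact values."""
    # Branch 1: plain 1x1 convolution
    b1 = layers.Conv2D(f1, 1, padding='same', activation='relu')(x)
    # Branch 2: 1x1 bottleneck, then 3x3 convolution
    b2 = layers.Conv2D(f3_reduce, 1, padding='same', activation='relu')(x)
    b2 = layers.Conv2D(f3, 3, padding='same', activation='relu')(b2)
    # Branch 3: 1x1 bottleneck, then 5x5 convolution
    b3 = layers.Conv2D(f5_reduce, 1, padding='same', activation='relu')(x)
    b3 = layers.Conv2D(f5, 5, padding='same', activation='relu')(b3)
    # Branch 4: 3x3 max-pool, then 1x1 projection
    b4 = layers.MaxPooling2D(3, strides=1, padding='same')(x)
    b4 = layers.Conv2D(f_pool, 1, padding='same', activation='relu')(b4)
    # Let the network "decide": every branch's output is kept and stacked
    return layers.Concatenate(axis=-1)([b1, b2, b3, b4])

inputs = tf.keras.Input(shape=(28, 28, 192))
outputs = inception_module(inputs, 64, 96, 128, 16, 32, 32)
model = tf.keras.Model(inputs, outputs)  # output: (28, 28, 64+128+32+32)
```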

Batch Normalization | Summary

References Sergey Ioffe, Christian Szegedy (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. ICML 2015. [ICML][arXiv] Lecture 6: Training Neural Networks, Part 1. CS231n: Convolutional Neural Networks for Visual Recognition. 48:52~1:04:39. [YouTube] Choung young jae (2017. 7. 2.). PR-021: Batch Normalization. YouTube. [YouTube] tf.nn.batch_normalization. TensorFlow. [LINK] Rui Shu (27 Dec 2016). A Gentle […]
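As a rough sketch of what the cited paper (and tf.nn.batch_normalization) computes at training time, assuming NumPy and toy shapes: normalize each feature by the mini-batch mean and variance, then restore representational power with learned scale (gamma) and shift (beta) parameters.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization over a mini-batch.
    x: (batch, features); gamma, beta: (features,) learned parameters."""
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # ~zero mean, unit variance
    return gamma * x_hat + beta             # learned scale and shift

x = np.random.randn(32, 4) * 5.0 + 3.0             # toy mini-batch
y = batch_norm_forward(x, np.ones(4), np.zeros(4))
print(y.mean(axis=0), y.std(axis=0))               # ~0 and ~1 per feature
```

At inference time the batch statistics are replaced by running averages accumulated during training, which this sketch omits.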

Convolutional Neural Networks | Study

References L. Fei-Fei, Justin Johnson (Spring 2017). CS231n: Convolutional Neural Networks for Visual Recognition. [LINK] Jefkine (5 September 2016). Backpropagation in Convolutional Neural Networks. [LINK] Convnet: Implementing Convolution Layer with Numpy. [LINK] Backpropagation in CNNs (CNN의 역전파). [LINK]
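In the spirit of the NumPy-implementation reference above, a minimal (and deliberately slow) sketch of a single-channel 2D convolution forward pass; the valid-padding, stride-1 choices are assumptions for illustration.

```python
import numpy as np

def conv2d_valid(x, w):
    """Naive 2D cross-correlation (what deep-learning libraries call
    'convolution') with no padding and stride 1.
    x: (H, W) input; w: (kH, kW) filter; returns (H-kH+1, W-kW+1)."""
    H, W = x.shape
    kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dot product of the filter with one receptive field
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * w)
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
w = np.array([[1., 0.], [0., -1.]])
print(conv2d_valid(x, w))  # (4, 4) feature map
```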

CS231n: Convolutional Neural Networks for Visual Recognition | Course

Lecture 6 | Training Neural Networks I Sigmoid Problems of the sigmoid activation function Problem 1: Saturated neurons kill the gradients. Problem 2: Sigmoid outputs are not zero-centered. Suppose a given feed-forward neural network has hidden layers and all of its activation functions are sigmoid. Then, except for the first layer, every layer receives only positive inputs. […]
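A small NumPy illustration of both problems (the input values are arbitrary): the sigmoid's derivative σ'(x) = σ(x)(1 − σ(x)) is nearly zero for large |x| (saturation), and σ(x) is always positive, so downstream layers never see zero-centered activations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-10.0, -2.0, 0.0, 2.0, 10.0])
s = sigmoid(x)
grad = s * (1.0 - s)          # derivative of the sigmoid

print(s)     # all outputs in (0, 1): never zero-centered
print(grad)  # ~4.5e-5 at |x| = 10: saturated neurons kill the gradient
```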

Minds and Machines (24.09x) | edX

Brief Summary Course title: Minds and Machines [HOME] Platform: edX Duration: 15 weeks Instructors: Alex Byrne, Chair of Philosophy Section, MIT; Ryan Doody, PhD in Philosophy & Linguistics, MIT Short summary of this course An introduction to the philosophy of mind, exploring consciousness, reality, AI, and more. The most in-depth philosophy course available online. About this course What is […]

Neural Networks and Learning Machines. 3rd Ed. Simon Haykin. Pearson. 2008

Chapter 8. Principal-Components Analysis 8.1 Introduction 8.2 Principles of Self-Organization Principle 1. Self-Amplification Principle 2. Competition Principle 3. Cooperation Principle 4. Structural Information 8.3 Self-Organized Feature Analysis 8.4 Principal-Components Analysis: Perturbation Theory 8.5 Hebbian-Based Maximum Eigenfilter 8.6 Hebbian-Based Principal-Components Analysis 8.7 Case Study: Image Coding 8.8 Kernel Principal-Components Analysis 8.9 Basic Issues Involved in […]

Computational Neuroscience | Course | MS CogSci

Range: 8.1~8.7, 9.1~9.10, 10.1~10.14, 10.19~10.21 Chapter 8. Principal-Components Analysis 8.1. Introduction Self-organized learning Self-organized learning is a type of unsupervised learning. Locality of learning 8.2. Principles of Self-Organization Principle 1: Self-amplification The following rule is based on Hebb’s postulate of learning. If the two neurons of a synapse are activated simultaneously, then the synaptic strength is […]
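A toy sketch of that rule (the learning rate, shapes, and data are assumptions): plain Hebbian learning strengthens a weight in proportion to the product of pre- and post-synaptic activity, which self-amplifies without bound; Oja's normalized variant, which Haykin develops in 8.5 as the Hebbian-based maximum eigenfilter, bounds the growth and extracts the principal component.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3)) @ np.diag([3.0, 1.0, 0.3])  # toy inputs
w = rng.standard_normal(3) * 0.01  # small random synaptic weights
eta = 0.005                        # assumed learning rate

for x in X:
    y = w @ x                      # post-synaptic activity
    # Hebb's postulate: simultaneous pre/post activity (x, y) strengthens
    # the synapse. Oja's extra term (-y**2 * w) normalizes the weights,
    # driving w toward the principal component of the inputs.
    w += eta * y * (x - y * w)

print(w / np.linalg.norm(w))  # ~ +/-[1, 0, 0]: the dominant direction
```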

Sequence to Sequence Learning with Neural Networks | Summary

References Ilya Sutskever, Oriol Vinyals, Quoc V. Le (2014). “Sequence to Sequence Learning with Neural Networks”. NIPS 2014: 3104-3112. [PDF] Sequence-to-Sequence Models. TensorFlow. [LINK] The official tutorial for sequence-to-sequence models. Seq2seq Library (contrib). TensorFlow. [LINK] Translation with a Sequence to Sequence Network and Attention. PyTorch. [LINK]
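A bare-bones sketch of the paper's idea, assuming PyTorch and made-up vocabulary and dimension sizes: an encoder LSTM compresses the source sequence into its final hidden state, which initializes a decoder LSTM that predicts the target sequence token by token (no attention, as in the original paper).

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab=1000, tgt_vocab=1000, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # Encode: keep only the final (hidden, cell) state, i.e. a
        # fixed-size vector summarizing the whole source sentence.
        _, state = self.encoder(self.src_emb(src))
        # Decode: condition the target LSTM on that state (teacher forcing).
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits

model = Seq2Seq()
src = torch.randint(0, 1000, (2, 7))   # toy source batch
tgt = torch.randint(0, 1000, (2, 5))   # toy shifted target batch
print(model(src, tgt).shape)           # torch.Size([2, 5, 1000])
```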