Samsung Notebook 9 Pen NT940X3M-K716S

Model information — Release date: September 2017; Screen size: 13.3 inch. Price comparison: Samsung Notebook 9 Pen NT940X3M-K716S | Enuri price comparison. Review videos: The latest 2-in-1 PC combining the strengths of the Galaxy Note, the Samsung Notebook 9 Pen [LINK] Samsung NoteBook 9 Pen 15-inch and 13.3-inch battery runtime test [LINK] 13.3-inch: 10 hours 50 minutes. Review of Samsung's flagship notebook with the S Pen, the Samsung Notebook 9 Pen: Samsung […]

Conditional Generative Adversarial Nets | M. Mirza, S. Osindero | 2014

Introduction A conditional version of Generative Adversarial Nets (GAN) in which both the generator and the discriminator are conditioned on some data y (a class label or data from another modality). Architecture Feed y into both the generator and the discriminator as an additional input layer, so that y and the original input are combined in a joint hidden representation, as in the sketch below.
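
A minimal sketch of this conditioning scheme in PyTorch, assuming simple fully connected networks and one-hot class labels (the layer sizes and MLP structure are illustrative, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

NOISE_DIM, NUM_CLASSES, DATA_DIM = 100, 10, 784  # assumed sizes for illustration

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, DATA_DIM), nn.Tanh(),
        )

    def forward(self, z, y):
        # condition y enters as an extra input, joined with the noise z
        return self.net(torch.cat([z, y], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(DATA_DIM + NUM_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x, y):
        # the same condition y is fed to the discriminator alongside the data
        return self.net(torch.cat([x, y], dim=1))

# usage: generate and score 16 samples conditioned on random class labels
z = torch.randn(16, NOISE_DIM)
y = torch.eye(NUM_CLASSES)[torch.randint(0, NUM_CLASSES, (16,))]  # one-hot labels
fake = Generator()(z, y)
score = Discriminator()(fake, y)
```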

Lecture 2: Markov Decision Processes | Reinforcement Learning | David Silver | Course

1. Markov Process / Markov chain 1.1. Markov process A Markov process or Markov chain is a tuple ⟨S, P⟩ such that S is a finite set of states and P is a state transition probability matrix. In a Markov process the initial state must be given; how to choose the initial state is not a role of the Markov process. 1.2. State […]
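
A minimal sketch of such a chain in Python, using a hypothetical 3-state example (the states and transition probabilities are assumed for illustration); note that the initial state is supplied from outside, consistent with the point above:

```python
import numpy as np

states = ["Sunny", "Rainy", "Cloudy"]          # finite state set S
P = np.array([[0.7, 0.1, 0.2],                 # transition matrix P; each row sums to 1
              [0.3, 0.4, 0.3],
              [0.4, 0.3, 0.3]])

def sample_episode(initial_state, length, seed=0):
    """Sample s_0, s_1, ..., s_length using P[s, s'] = Pr(S_{t+1} = s' | S_t = s)."""
    rng = np.random.default_rng(seed)
    s, episode = initial_state, [initial_state]
    for _ in range(length):
        s = rng.choice(len(states), p=P[s])
        episode.append(s)
    return [states[i] for i in episode]

print(sample_episode(initial_state=0, length=5))
```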

Inception Module | Summary

References Udacity (2016. 6. 6.). Inception Module. YouTube. [LINK] Udacity (2016. 6. 6.). 1×1 Convolutions. YouTube. [LINK] Tommy Mulc (2016. 9. 25.). Inception modules: explained and implemented. [LINK] Szegedy et al. (2015). Going Deeper with Convolutions. CVPR 2015. [arXiv] Summary History The inception module was first introduced in GoogLeNet for the ILSVRC'14 competition. Key concept Let a convolutional network decide […]
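
A minimal PyTorch sketch of a GoogLeNet-style inception module: parallel 1×1, 3×3, and 5×5 convolutions plus 3×3 max pooling, with 1×1 bottleneck convolutions reducing channels before the expensive branches, and the branch outputs concatenated along the channel axis so the network itself can decide which filter sizes to rely on. The channel counts are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class Inception(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 64, kernel_size=1)                      # 1x1 branch
        self.b2 = nn.Sequential(nn.Conv2d(in_ch, 96, 1), nn.ReLU(),        # 1x1 reduce, then 3x3
                                nn.Conv2d(96, 128, 3, padding=1))
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 16, 1), nn.ReLU(),        # 1x1 reduce, then 5x5
                                nn.Conv2d(16, 32, 5, padding=2))
        self.b4 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),      # pool, then 1x1 project
                                nn.Conv2d(in_ch, 32, 1))

    def forward(self, x):
        # concatenate branch outputs along the channel dimension
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

x = torch.randn(1, 192, 28, 28)
print(Inception(192)(x).shape)   # torch.Size([1, 256, 28, 28]) = 64+128+32+32 channels
```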

Graduate School Guide | Summary

References A Survival Guide to a PhD. Andrej Karpathy blog. Sep 7, 2016 [LINK] HOWTO: Get into grad school for science, engineering, math and computer science [LINK] 10 Highly Personal Pieces of Advice for Graduate Students [LINK] Literature survey tips for beginners at reading papers! [LINK] Master's and Ph.D. [LINK] How I Survived Graduate School [LINK] Things I Learned from the Ph.D. Program […]

CS231n: Convolutional Neural Networks for Visual Recognition | Course

Lecture 6 | Training Neural Networks I Sigmoid Problems of the sigmoid activation function Problem 1: Saturated neurons kill the gradients. Problem 2: Sigmoid outputs are not zero-centered. Suppose a given feed-forward neural network has hidden layers and all activation functions are sigmoid. Then, except for the first layer, all layers receive only positive inputs. […]
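
A small numerical check of the two problems above, assuming the standard sigmoid(x) = 1/(1+e^-x) (the specific inputs are arbitrary, chosen only to show the effect):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Problem 1: for large |x| the local gradient sigmoid'(x) = s*(1-s) is nearly zero,
# so gradients flowing backward through a saturated neuron vanish.
for x in [0.0, 2.0, 10.0]:
    s = sigmoid(x)
    print(f"x={x:5.1f}  sigmoid={s:.5f}  local grad={s * (1 - s):.5f}")
# x=  0.0  sigmoid=0.50000  local grad=0.25000
# x=  2.0  sigmoid=0.88080  local grad=0.10499
# x= 10.0  sigmoid=0.99995  local grad=0.00005

# Problem 2: sigmoid outputs lie in (0, 1), so every layer after the first
# receives only positive inputs, which constrains the signs of weight gradients.
h = sigmoid(np.random.randn(4))   # hidden activations are all in (0, 1)
print((h > 0).all())              # True: strictly positive inputs to the next layer
```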

Applying to Ph.D. Programs in Computer Science

Author: Mor Harchol-Balter (Computer Science Department, Carnegie Mellon University) Last updated: 2014 1 Introduction This document is intended for people applying to Ph.D. programs in computer science or related areas. The author is a professor of computer science at CMU, and has been involved in the Ph.D. admissions process at CMU, U.C. Berkeley, and MIT. 2 Do I […]

Neural Networks and Learning Machines. 3rd Ed. Simon O. Haykin. Pearson. 2008

Chapter 8. Principal-Components Analysis 8.1 Introduction 8.2 Principles of Self-Organization Principle 1. Self-Amplification Principle 2. Competition Principle 3. Cooperation Principle 4. Structural Information 8.3 Self-Organized Feature Analysis 8.4 Principal-Components Analysis: Perturbation Theory 8.5 Hebbian-Based Maximum Eigenfilter 8.6 Hebbian-Based Principal-Components Analysis 8.7 Case Study: Image Coding 8.8 Kernel Principal-Components Analysis 8.9 Basic Issues Involved in […]
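
A minimal sketch of the Hebbian-based maximum eigenfilter (Section 8.5), i.e. Oja's rule: a single linear neuron y = wᵀx trained with Δw = η·y·(x − y·w) converges (up to sign) to the principal eigenvector of the input correlation matrix. The 2-D Gaussian data below is an assumed toy example, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[3.0, 1.0],
              [1.0, 2.0]])                       # input correlation matrix (zero-mean data)
X = rng.multivariate_normal(np.zeros(2), C, size=5000)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)                   # Oja's normalized Hebbian update

top_eigvec = np.linalg.eigh(C)[1][:, -1]         # principal eigenvector of C
print("learned w:   ", w / np.linalg.norm(w))    # matches top_eigvec up to sign
print("eigenvector: ", top_eigvec)
```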

Sequence to Sequence Learning with Neural Networks | Summary

References Ilya Sutskever, Oriol Vinyals, Quoc V. Le (2014). "Sequence to Sequence Learning with Neural Networks". NIPS 2014: 3104-3112. [PDF] Sequence-to-Sequence Models. TensorFlow. [LINK] The official tutorial for sequence-to-sequence models. Seq2seq Library (contrib). TensorFlow. [LINK] Translation with a Sequence to Sequence Network and Attention. PyTorch. [LINK]