The Tech Pulse

October 5, 2017 • 11 min read
Tags
  • 3blue1brown
  • Learning
  • Neurons
  • Layers
  • Weights
  • Biases
  • Sigmoid
  • Training

But what is a neural network? | Deep learning chapter 1

One Sentence Summary

A friendly visual tour of a simple neural network for digit recognition, explaining neurons, weights, biases, and matrix math concepts.

Main Points

  • 784 input neurons represent 28×28 pixel grayscale values.
  • 10 output neurons indicate the digit probabilities.
  • Two hidden layers with 16 neurons each (design choice).
  • Neuron activation is a number between 0 and 1 after a sigmoid.
  • Weights and biases connect layers; ~13,000 parameters total.
  • Operations are nicely expressed as a matrix-vector product.
  • Hidden layers may detect edges and subcomponents like lines or loops.
  • Learning means tuning weights and biases to recognize digits.
  • The network is a single function: 784 inputs to 10 outputs.
  • Modern networks often prefer ReLU over the traditional sigmoid for training.
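The "~13,000 parameters" figure above can be checked directly. A minimal sketch, assuming the 784-16-16-10 layout described in these points:

```python
# Parameter count for a dense 784-16-16-10 network:
# each layer contributes (inputs x outputs) weights plus one bias per output.
layer_sizes = [784, 16, 16, 10]
params = sum(n_in * n_out + n_out
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(params)  # 13002, i.e. "almost exactly 13,000"
```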

Takeaways

  • Layered features: pixels -> edges -> patterns -> digits.
  • Inspecting weights helps debugging and architectural intuition.
  • Matrix-based computation enables fast, efficient coding and training.
  • Training is about finding good parameter values, not manual hand-tuning.
  • Activation choice (sigmoid vs. ReLU) significantly affects training dynamics.

SUMMARY

A 3Blue1Brown-style narrator explains neural networks for digit recognition: layers of activations, weights, biases, sigmoid squashing, matrix-vector form, and why structure enables learning.

IDEAS

  • Human vision effortlessly recognizes digits despite pixel differences, highlighting abstraction beyond raw sensory input.
  • The same concept triggers different retinal cells, yet the cortex maps them to one stable idea.
  • Neural networks attempt this mapping: 784 pixel inputs become 10 digit outputs through layered transformations.
  • Neurons are simplified as numbers between zero and one, representing activation strength in each layer.
  • Input activations encode grayscale pixels, while output activations encode confidence for each digit class.
  • Hidden layers are the mystery engine, turning raw pixels into meaningful intermediate representations gradually.
  • Layered structure is motivated by decomposing recognition into abstractions: edges, loops, and digit parts.
  • A hopeful interpretation: second-layer neurons detect small edges; later neurons detect loops and strokes.
  • Recognizing a loop can decompose into detecting many local edges arranged in a circular pattern.
  • Speech understanding similarly builds abstraction layers from audio to phonemes, words, phrases, and thoughts.
  • Each neuron computes a weighted sum of previous activations, encoding which pixel patterns matter most.
  • Weight grids can be visualized as green positive and red negative pixel masks over the input.
  • Negative weights around a region help detect edges by rewarding contrast, not just brightness alone.
  • Weighted sums produce any real value, so a squashing nonlinearity maps them into the 0–1 activation range.
  • Sigmoid compresses the real line: negative inputs near zero, positive inputs near one smoothly.
  • Bias shifts the activation threshold, requiring stronger evidence before a neuron “fires” meaningfully during inference.
  • Every neuron connects to all previous neurons in vanilla networks, creating dense parameterization throughout.
  • A small hidden layer already yields thousands of parameters, demonstrating rapid combinatorial complexity growth.
  • Learning means finding values for all weights and biases so the network performs correctly on data.
  • Manually tuning 13,000 parameters is impractical, motivating automated optimization methods for learning.
  • Understanding weights and biases provides interpretability, helping debug failures and challenge assumptions effectively.
  • Matrix-vector notation compresses the computation, revealing structure as linear algebra plus nonlinearity.
  • Weight matrices encode connections; bias vectors shift pre-activations; sigmoid applies elementwise afterward.
  • Libraries optimize matrix multiplication heavily, making neural network computation computationally practical at scale.
  • Neurons are better viewed as functions of previous activations, not static containers of numbers.
  • The whole network is a function from 784 inputs to 10 outputs with many parameters inside.
  • Complexity is reassuring: hard tasks require rich functions; simplicity would likely underfit recognition demands.
  • Modern networks often replace sigmoid with ReLU for easier training in deep architectures today.
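The per-neuron computation described above (weighted sum, plus bias, squashed by a sigmoid) can be sketched in plain Python; the inputs, weights, and bias below are made-up toy values:

```python
import math

def sigmoid(x):
    # Squashes any real number into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(activations, weights, bias):
    # Weighted sum of previous-layer activations, shifted by the bias,
    # then squashed so the result is a valid activation between 0 and 1.
    z = sum(w * a for w, a in zip(weights, activations)) + bias
    return sigmoid(z)

# Toy example: two inputs with arbitrary weights and bias.
print(neuron([0.9, 0.1], [2.0, -1.0], -0.5))  # ≈ 0.769
```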

INSIGHTS

  • Neural networks formalize abstraction by stacking simple functions, turning pixels into concepts through composition.
  • Dense layers encode pattern detectors as learned templates, where weights become feature-selective masks effectively.
  • Bias is not decoration; it is the gatekeeper controlling when evidence becomes meaningful activation.
  • Nonlinearity is the critical ingredient; without it, layers collapse into one linear transformation overall.
  • The hope of interpretability comes from modular features, but learning may exploit unexpected solutions anyway.
  • Linear algebra is the language of deep learning; matrices and vectors describe networks more honestly.
  • “Learning” is parameter search in a high-dimensional space, not mystical emergence from nothing.
  • Brains inspire networks loosely, but the math is far simpler: numbers, sums, and squashing functions.
  • Computation scales because matrix multiplications are optimized hardware primitives, enabling practical training speed.
  • Training ease shapes architecture choices; ReLU won not by biology but by optimization behavior.
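The "collapse into one linear transformation" claim can be verified numerically. A toy check with random small matrices (biases omitted for brevity; with biases, the composition is still a single affine map):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # first "layer" weights
W2 = rng.normal(size=(2, 3))   # second "layer" weights
x = rng.normal(size=4)

# Two purely linear layers applied in sequence...
two_layers = W2 @ (W1 @ x)
# ...equal one linear layer whose matrix is the product W2 @ W1.
one_layer = (W2 @ W1) @ x

print(np.allclose(two_layers, one_layer))  # True: no nonlinearity, no extra power
```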

QUOTES

  • "This is a three."
  • "your brain has no trouble recognizing it as a three."
  • "sit down and write for me a program that takes in a grid of 28x 28 pixels"
  • "show you what a neural network actually is, assuming no background"
  • "not as a buzzword, but as a piece of math."
  • "neural networks are inspired by the brain."
  • "a thing that holds a number. specifically a number between zero and one."
  • "This number inside the neuron is called its activation."
  • "hidden layers, which for the time being should just be a giant question mark"
  • "The way the network operates, activations in one layer determine the activations of the next layer."
  • "The brightest neuron of that output layer is the network's choice"
  • "In a perfect world, we might hope that each neuron in the second to last layer corresponds with one of these subcomponents."
  • "assign a weight to each one of the connections"
  • "pump this weighted sum into some function that squishes the real number line"
  • "called the sigmoid function, also known as a logistic curve."
  • "That additional number is called the bias."
  • "this network has almost exactly 13,000 total weights and biases."
  • "the entire network is just a function."
  • "so much of machine learning just comes down to having a good grasp of linear algebra."
  • "Relu seems to be much easier to train."

HABITS

  • Pause to notice cognitive feats like digit recognition, using curiosity to motivate technical learning daily.
  • Break complex problems into layers of abstraction, mirroring how representations build in understanding.
  • Visualize parameters as patterns, turning weights into interpretable masks rather than opaque numbers.
  • Use thought experiments to expose impossibility of hand-tuning, clarifying need for learning algorithms.
  • Keep models simple initially, learning “plain vanilla” forms before exploring advanced variants later.
  • Translate computations into compact notation, improving communication and reducing code complexity significantly.
  • Rely on optimized linear algebra libraries instead of hand-rolling matrix multiplication implementations yourself.
  • Treat each neuron as a function, focusing on transformations rather than static components alone.
  • Inspect learned weights and biases to challenge assumptions and diagnose unexpected model behavior.
  • Use intermediate representations to reason about errors, not only final output predictions alone.
  • Learn foundational linear algebra concepts, especially matrix-vector multiplication, to understand deep learning better.
  • Prefer incremental learning: understand structure first, then study training dynamics like gradient descent.
  • Add biases deliberately when modeling thresholds, controlling activation sensitivity and sparsity in networks.
  • Choose activations pragmatically; prefer ReLU for trainability rather than biological realism alone.
  • Use analogy carefully: borrow intuition from brains, but ground reasoning in precise math always.
  • Embrace complexity when needed; difficult tasks require rich functions with many parameters inside.
  • Think in vectors and matrices while coding, aligning mental models with implementation efficiency.
  • Apply elementwise nonlinearities consistently, remembering they prevent collapse into linear transformations.
  • Use classic datasets like handwritten digits to build intuition before tackling messy real-world data.
  • Seek resources and code to experiment hands-on, reinforcing concepts through direct manipulation.

FACTS

  • Input images are represented as 28×28 grayscale pixels, producing 784 input neuron activations.
  • Activations are described as numbers between 0 and 1 representing grayscale intensity values.
  • Output layer has 10 neurons, each representing one digit class from 0 to 9.
  • Example network uses two hidden layers with 16 neurons each as an illustrative structure.
  • Dense connections assign a weight for every connection between layers, forming many parameters quickly.
  • A neuron computes a weighted sum of previous activations, then applies a squashing function.
  • Sigmoid is described as mapping real numbers into the interval between 0 and 1 smoothly.
  • Bias is added to weighted sums, shifting activation thresholds before applying sigmoid function.
  • The example network has about 13,000 weights and biases in total across layers combined.
  • Weight matrices and activation vectors express layer transitions via matrix-vector products plus bias addition.
  • Sigmoid in vector form is applied elementwise to the resulting vector after affine transformation.
  • Many ML libraries optimize matrix multiplication, making these computations fast in practice.
  • The network can be viewed as a function mapping 784 inputs to 10 outputs overall.
  • The narrator notes many neural network variants exist beyond the plain vanilla feedforward form.
  • ReLU stands for rectified linear unit and is max(0, a), where a is the pre-activation.
  • Guest Lisha Li states modern networks often prefer ReLU because it is easier to train.
  • Sigmoid usage is described as more common historically, motivated partly by biological analogies.
  • ReLU was found empirically to work well for very deep neural networks during training.
  • The video separates structure explanation from learning explanation, deferring training to next video.
  • A linear algebra series is referenced, emphasizing matrix multiplication understanding as foundational.
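The matrix-vector form these facts describe, a′ = σ(Wa + b) applied layer by layer, can be sketched with NumPy. Shapes follow the 784-16-16-10 example; the weights here are random placeholders, not trained values:

```python
import numpy as np

def sigmoid(z):
    # Elementwise squashing into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # ReLU: max(0, a) applied elementwise.
    return np.maximum(0.0, z)

def forward(a, weights, biases, act=sigmoid):
    # One affine transform plus an elementwise nonlinearity per layer.
    for W, b in zip(weights, biases):
        a = act(W @ a + b)
    return a

sizes = [784, 16, 16, 10]
rng = np.random.default_rng(1)
weights = [rng.normal(scale=0.1, size=(n_out, n_in))
           for n_in, n_out in zip(sizes, sizes[1:])]
biases = [np.zeros(n_out) for n_out in sizes[1:]]

out = forward(rng.random(784), weights, biases)
print(out.shape)  # (10,): one activation per digit class
```

Swapping `act=relu` into `forward` gives the ReLU variant mentioned above without changing the matrix structure.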

REFERENCES

  • Handwritten digit recognition
  • 28×28 pixel grids
  • Visual cortex
  • Neurons as activations
  • Input layer (784 neurons)
  • Hidden layers
  • Output layer (10 neurons)
  • Edge detection
  • Loops and strokes in digits
  • Speech parsing abstraction analogy
  • Weights
  • Biases
  • Weighted sum
  • Sigmoid / logistic curve
  • Matrix-vector multiplication
  • Weight matrix
  • Bias vector
  • Elementwise nonlinearity
  • Linear algebra series (chapter 3)
  • ReLU (rectified linear unit)
  • Patreon
  • YouTube recommendation algorithm
  • Lisha Li
  • Amplify Partners
  • Deep learning theoretical side

ONE-SENTENCE TAKEAWAY

Neural networks are layered functions of weighted sums and nonlinearities that learn abstractions from data.

RECOMMENDATIONS

  • Start with plain feedforward networks to grasp structure before exploring convolutions and transformers later.
  • Practice converting layer computations into matrix-vector notation to internalize deep learning mechanics quickly.
  • Visualize weight masks on pixels to build intuition about what features neurons can detect.
  • Experiment with biases to see how thresholds change activation patterns and sparsity behaviors clearly.
  • Compare sigmoid and ReLU activations empirically, observing trainability differences on simple tasks yourself.
  • Learn matrix multiplication deeply; it underpins nearly every neural network forward pass computation.
  • Treat the network as a function, focusing on composition of transformations across layers always.
  • Use MNIST-like digit datasets to prototype and debug models before harder real-world images.
  • Inspect learned weights after training to test whether your “edges then loops” intuition holds.
  • Add intermediate layer visualizations to understand representations, not just final classification accuracy alone.
  • Avoid hand-tuning parameters; instead, learn optimization methods that search parameter space efficiently.
  • Build small networks first, then scale hidden units and layers to see capacity effects clearly.
  • Use modern libraries for linear algebra speedups rather than implementing operations from scratch manually.
  • Ask what abstractions your task contains, then design layer structure that could capture them.
  • Remember nonlinearity is essential; without it, deeper networks reduce to one linear mapping.
  • Use thought experiments to stress-test intuitions, noticing where hopes about interpretability might fail.
  • Read about activation functions and gradient flow to understand why ReLU trains deeper models better.
  • Practice explaining activations, weights, and biases aloud, strengthening conceptual clarity for future learning.
  • Keep analogies grounded: let brain inspiration motivate, but let math govern your understanding.
  • Continue on to learning dynamics next, studying backpropagation and gradient descent as the engine of training.
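The sigmoid-vs-ReLU comparison recommended above can start as simply as checking gradients at large pre-activations (derivatives written by hand; a sketch of the saturation effect, not a training experiment):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Derivative of the sigmoid: s(x) * (1 - s(x)); peaks at 0.25, then vanishes.
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # Derivative of max(0, x): 1 for positive inputs, 0 otherwise.
    return 1.0 if x > 0 else 0.0

# At large pre-activations the sigmoid gradient shrinks toward zero,
# while ReLU keeps a gradient of 1; this is one reason ReLU trains more easily.
for z in (0.0, 5.0, 10.0):
    print(z, sigmoid_grad(z), relu_grad(z))
```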
