
Deep Learning Practice Questions

A list of deep learning interview questions for practice.


In this article, we will be going over 50 practice questions related to deep learning.

General Terminology and Concepts

  • What is a neural network?

A neural network, also known as an artificial neural network (ANN), is the foundation of deep learning algorithms. It is inspired by the structure of the human brain. The basic unit in a neural network is a neuron, and neurons are organized into a series of interconnected layers that can send and receive information.

  • What is an input layer?

The input layer is the first layer in a neural network. It takes input values and passes them on to the next layers for processing.

  • What is an output layer?

The output layer is the last layer of the neural network and produces the network's outputs.

  • What are hidden layers?

Hidden layers are the layers between the input and output layers. This is where all the computation and "learning" in a neural network happens. Usually, each layer learns different aspects of the data in order to minimize the error or cost function.

  • What is a weight?

A weight is a parameter in a neural network that controls the strength of the connection between two neurons. When a neuron receives an input, it multiplies this input by the corresponding connection's weight.

  • What is a bias?

A bias is a constant that offsets the result of the inputs multiplied by their corresponding weights. After the inputs are multiplied by the weights of the corresponding connection, the bias value is added. Bias is analogous to the constant in a linear equation.

  • What is an activation function?

After the weighted sum of the inputs is calculated, the activation function transforms it into an output that is sent to the next neurons.
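
To make this concrete, here is a minimal NumPy sketch of a single neuron (the input, weight, and bias values are made up for illustration) that computes a weighted sum plus bias and passes it through a sigmoid activation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, -1.2, 3.0])   # hypothetical inputs from the previous layer
weights = np.array([0.4, 0.7, -0.2])  # hypothetical connection weights
bias = 0.1                            # hypothetical bias
z = np.dot(inputs, weights) + bias    # weighted sum of the inputs plus bias
print(sigmoid(z))                     # the activated output sent to the next neurons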

  • Which of the following is not an activation function?

A. Sigmoid B. ReLU C. Dot Product D. tanh

Explanation: The answer is C. Sigmoid, ReLU, and tanh are all activation functions; a dot product is not an activation function.

  • What is the ReLU activation function?

The rectified linear unit (ReLU) activation function is a piecewise function that outputs 0 if the input is negative and the original input if it is positive. Mathematically, this can be represented as the following equation:

ReLU(x) = max(0, x)
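
In code, ReLU is a one-liner; here is a small NumPy sketch (the sample inputs are arbitrary):

import numpy as np

def relu(x):
    return np.maximum(0, x)   # 0 for negative inputs, the input itself otherwise

print(relu(np.array([-2.0, 0.0, 3.5])))   # [0.  0.  3.5]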

  • What is the sigmoid activation function?

The sigmoid function is a special form of the logistic function. Its domain includes all real numbers, and its output is always between 0 (exclusive) and 1 (exclusive). Its graph is shaped like an elongated "S", and the equation below represents the sigmoid function.

sigmoid(x) = 1 / (1 + e^(-x))

  • What is the softmax activation function?

Softmax is an activation function that converts a vector of numbers into a vector of probabilities. These probabilities add up to one. The softmax function is commonly used as the activation function in the output layer of neural networks for multiclass classification problems.
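
As an illustration, here is a minimal NumPy implementation of softmax (the scores are made up; subtracting the maximum is a standard trick for numerical stability):

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))   # shift by the max for numerical stability
    return e / e.sum()          # normalize so the outputs sum to 1

scores = np.array([2.0, 1.0, 0.1])   # hypothetical output-layer scores
print(softmax(scores))               # a vector of probabilities that adds up to one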

  • What is a GPU and why is it used?

A Graphics Processing Unit (GPU) is a specialized processor that was originally designed to speed up graphics rendering. It is commonly used in deep learning since it can perform simultaneous computations and can process more data than a Central Processing Unit (CPU).

  • What is a GAN?

A Generative Adversarial Network (GAN) is a type of generative model, meaning that it can create new data instances resembling samples in your training data. A GAN's structure consists of two parts: a generator that generates new data, and a discriminator that learns to distinguish real data from synthetic data. Essentially, the generator tries to fool the discriminator, and the discriminator tries to not be fooled.

How models learn

  • What does the learning rate of a model represent?

As a model learns, its weights and biases are updated so that it can minimize the cost or error function. The learning rate is a hyperparameter that controls how much the model's weights are changed each time they are updated in response to the estimated error.

  • What is gradient descent?

The goal of a neural network, while it is training, is to minimize a cost function. Gradient descent is an iterative optimization algorithm used to find a local minimum of the cost function.
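
As a toy illustration, here is gradient descent minimizing f(x) = x^2 (the function, starting point, and learning rate are chosen arbitrarily for the example):

def grad(x):
    return 2 * x   # derivative of f(x) = x^2

x = 5.0                # hypothetical starting point
learning_rate = 0.1
for _ in range(100):
    x = x - learning_rate * grad(x)   # step in the direction that reduces f
print(x)   # approaches 0, the minimum of f(x) = x^2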

  • What is backpropagation?

Gradient descent needs to calculate derivatives to determine which direction to move in to find a local minimum. Backpropagation is the process of calculating these derivatives. The neural network's error is calculated once the inputs have propagated forward to the output layer. From this output layer, the network error propagates backwards to the input layer, a process called backpropagation. This helps calculate how much the weights of each node need to change in order to reduce the error.

  • Do neural networks require manual feature extraction?

Neural networks do not require manual feature extraction; they can operate on raw data and learn complex features themselves.

  • Which of the following techniques can be used to prevent overfitting in a neural network?

A. Dropout B. Early Stopping C. Data Augmentation D. All of the Above

Explanation: The answer is D. Dropout, early stopping, and data augmentation are all techniques used to prevent overfitting. Dropout works by randomly disabling neurons and their connections, which prevents the neural network from relying too much on any single neuron. Early stopping halts training when the network's performance on the validation data stops improving. Data augmentation increases the amount of training data and usually helps the neural network generalize better, thus reducing overfitting.
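
For illustration, here is a minimal Keras sketch combining two of these techniques, dropout and early stopping (the layer sizes, input shape, and the x_train/y_train/x_val/y_val arrays are hypothetical):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dropout(0.2),   # randomly disables 20% of neurons during training
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])   # stops when val_loss stops improving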

Computer Vision and Convolutional Neural Networks

  • What is a convolution?

A convolution is performed on the input data using a kernel, also called a filter, to produce a feature map. A convolution is executed by ‘sliding’ the kernel over the input; at each position, the dot product between the kernel and the region it covers goes into the feature map. The image below shows a sample input image's pixel values and, on the right, a kernel that will be applied to the input image.

[Figure: a sample input image's pixel values and the kernel to be applied]

The kernel, or filter, slides across the input image. Each weight is multiplied by the corresponding input pixel, and the sum of these products goes into the feature map.

[Figure: the filter sliding over the image to produce the feature map]
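
The following NumPy sketch shows the operation end to end (the input values and the 2x2 kernel are made up; no padding, stride 1):

import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    feature_map = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # dot product of the kernel and the receptive field it currently covers
            feature_map[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return feature_map

image = np.arange(25).reshape(5, 5)    # hypothetical 5x5 input image
kernel = np.array([[1, 0],
                   [0, -1]])           # hypothetical 2x2 filter
print(convolve2d(image, kernel))       # a 4x4 feature map (all entries -6 for this input)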

  • What is a filter and what does it do?

A filter is a matrix of weights that are used to extract features during a convolution. The individual values in the filter are updated during the training process.

  • What is a receptive field?

The area of the original image that the filter covers is called the receptive field.

  • What is a dot product?

A dot product is the sum of the element-wise multiplication between the receptive field of the input and filter. This results in a scalar value, and for this reason, a dot product is sometimes also called a scalar product.

  • What is pooling?

Pooling reduces the dimensionality of the feature maps generated by convolutions. This reduces the amount of computation needed while still retaining the significant information, decreasing training time without significant accuracy loss. Pooling is done using a filter that slides across the feature map and represents each region with a single number. Depending on the type of pooling used, that number will vary.

  • Explain max pooling.

Max pooling is when the greatest number in each window is used, as shown below.

[Figure: max pooling example]

  • Explain min pooling.

Min pooling is when the lowest number in each window is used to represent that region of the feature map.

  • Explain average pooling.

Average pooling represents each region of the feature map by calculating its average value.
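
The three pooling variants above differ only in how each window is summarized; this small NumPy sketch shows all of them (the feature-map values, 2x2 window, and stride 2 are chosen for illustration):

import numpy as np

def pool2d(feature_map, size=2, mode="max"):
    h, w = feature_map.shape
    pooled = np.zeros((h // size, w // size))
    for i in range(0, h - h % size, size):
        for j in range(0, w - w % size, size):
            window = feature_map[i:i+size, j:j+size]
            if mode == "max":
                pooled[i // size, j // size] = window.max()
            elif mode == "min":
                pooled[i // size, j // size] = window.min()
            else:
                pooled[i // size, j // size] = window.mean()
    return pooled

fm = np.array([[1, 3, 2, 4],
               [5, 6, 1, 2],
               [7, 2, 9, 0],
               [4, 8, 3, 5]])
print(pool2d(fm, mode="max"))   # [[6. 4.] [8. 9.]]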

  • Which of the following are layers in a Convolutional Neural Network?

A. Convolutional Layers B. Pooling Layers C. Fully-connected Layers D. All of the Above

Explanation: The answer is D. Convolutional layers, pooling layers, and fully-connected layers are all layers used in a Convolutional Neural Network. A convolutional layer is the main building block of a CNN; it contains the filters, or kernels, that are convolved with the image. The pooling layer typically comes after the convolutional layer. Fully-connected layers form the last layers of the network, and they come after the final convolutional or pooling layer.

  • Which activation function does a CNN typically use?

A Convolutional Neural Network typically uses the Rectified Linear Unit (ReLU) activation function.

Natural Language Processing (NLP)

  • What are some common applications of natural language processing?

Natural language processing (NLP) is used for text classification (as in spam filtering or intent classification), machine translation (e.g. Google Translate), text summarization, question answering (extracting key information from text to answer a question), and much more.

  • What are word embeddings?

Word embeddings are a way to numerically represent words as real-valued vectors. Word embeddings allow words with similar meanings to have a similar representation. Word embeddings are also able to learn relationships between words. For example, the difference between the word embeddings of the words "woman" and "man" is about the same as the difference between the words "aunt" and "uncle".

  • List some common word embedding techniques.

Word embedding techniques include Google's word2vec, Stanford's GloVe, and TF-IDF.

  • Which Python library offers an implementation of the word2vec model?

The gensim library provides a widely used implementation of word2vec.
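
As a brief illustration, here is how word2vec can be trained with gensim (a sketch assuming gensim 4.x; the two-sentence corpus is made up, so the resulting vectors are meaningless):

from gensim.models import Word2Vec

sentences = [["deep", "learning", "is", "fun"],
             ["neural", "networks", "learn", "features"]]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1)
vector = model.wv["deep"]                       # the 100-dimensional embedding of "deep"
print(model.wv.most_similar("deep", topn=2))    # nearest words by cosine similarity
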
  • Which Python library offers an implementation of the GloVe model?

The glove_python library offers an implementation of the GloVe model.

  • What is an RNN and why is it used in NLP?

A Recurrent Neural Network (RNN) is a type of neural network designed to deal specifically with sequences of data. It works well with sequential data because of its internal memory. For example, let's take a look at a simple text classification problem. In natural language, the order of the words and the context surrounding those words matter, and RNNs, unlike several other approaches, make use of both. The RNN takes the output from processing the first word and feeds it in alongside the second word, and this process continues; every word uses the output of the previous word to help make a prediction. That is why RNNs are used for sequential data, including text classification, named-entity recognition, and time series data.

  • What is an LSTM?

Long short-term memory (LSTM) networks are a special type of RNN capable of learning long-term dependencies. LSTMs overcome two technical problems - vanishing gradients and exploding gradients, both of which are related to training.

Basics of TensorFlow

  • What is TensorFlow?

TensorFlow is an open source platform for machine learning that allows developers to create, train, test, and deploy machine learning models, including several types of neural networks. It works with Python, Java, and C++, and can also be used in the browser, in mobile applications, and even on a Raspberry Pi.

  • What is a tensor?

A tensor is an immutable, multi-dimensional array with a uniform type. Tensors are similar to NumPy arrays.

  • What is the rank of a tensor?

The rank of a tensor is the number of indices required to select an individual element of the tensor. You can think about it as the number of dimensions.

  • What is a "rank-0" tensor?

A "rank-0" tensor is one with a single value and no axes.

  • What is another name for a "rank-0" tensor?

A "rank-0" tensor is also called a "scalar".

  • How would you implement a "rank-0" tensor in Python?
  • What is a "rank-1" tensor?

A "rank-1" tensor is like a list of values. It has one axis, or dimension.

  • What is another name for a "rank-1" tensor?

A "rank-1" tensor is also called a "vector".

  • How would you implement a "rank-1" tensor in Python?
  • What is a "rank-2" tensor?

A "rank-2" tensor is a tensor with two dimensions, or axes. You can think of this is a two-dimensional array or a table of values.

  • What is another name for a "rank-2" tensor?

A "rank-2" tensor is commonly referred to as a "matrix".

  • How would you implement a "rank-2" tensor in Python?
  • What does the shape of a tensor signify?

A tensor's shape is the number of elements in each of its dimensions. For example, a tensor with 2 rows and 3 columns has shape [2, 3].

  • How can you determine the number of axes of a tensor in TensorFlow?

You can use the tensor's ndim attribute to output the number of dimensions.
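
For example (the tensor is arbitrary):

import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])
print(t.ndim)   # 2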

  • What is the size of a tensor?

The size of a tensor is the total number of items it contains. This can be calculated by multiplying together the elements of the tensor's shape vector.

  • How do you calculate the size of a tensor in TensorFlow?

TensorFlow provides a tf.size() function for this purpose.
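
A short sketch, with an arbitrarily chosen shape:

import tensorflow as tf

t = tf.zeros([3, 2, 5])       # an arbitrary tensor with shape [3, 2, 5]
print(tf.size(t).numpy())     # 30, i.e. 3 * 2 * 5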

  • How do you convert a tensor to a NumPy array?

There are two methods that can convert a tensor to a NumPy array: np.array() and tensor.numpy().
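
A quick sketch of both (the tensor values are arbitrary):

import numpy as np
import tensorflow as tf

t = tf.constant([[1, 2], [3, 4]])
a1 = np.array(t)   # method 1: pass the tensor to np.array()
a2 = t.numpy()     # method 2: call the tensor's numpy() method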

  • How do you add tensors?

You can use the tf.add method or the + operator.
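
For example (the tensor values are arbitrary):

import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[1, 1], [1, 1]])
print(tf.add(a, b))   # element-wise sum
print(a + b)          # the same result using the + operator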

  • How do you perform element-wise multiplication on tensors?

You can use the tf.multiply method or the * operator.
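
For example (the tensor values are arbitrary):

import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[2, 2], [2, 2]])
print(tf.multiply(a, b))   # element-wise product
print(a * b)               # the same result using the * operator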

  • How do you multiply matrices in TensorFlow?

You can use the tf.matmul method or the @ operator.
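
For example (the matrices are arbitrary, with compatible shapes):

import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 6], [7, 8]])
print(tf.matmul(a, b))   # matrix product: [[19, 22], [43, 50]]
print(a @ b)             # the same result using the @ operator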

With this article at OpenGenus, you should have good practice with basic Deep Learning questions. Enjoy.


Deep-Learning-Specialization-Coursera

This repo contains the updated versions of all the assignments/labs (done by me) of the Deep Learning Specialization on Coursera by Andrew Ng. It includes building various deep learning models from scratch and implementing them for object detection, facial recognition, autonomous driving, neural machine translation, trigger word detection, etc. (Deep Learning Specialization, Coursera, updated version 2021.)

GitHub Repo

Announcement

[!IMPORTANT] Check our latest paper (accepted in ICDAR’23) on Urdu OCR

UTRNet

This repo contains all the solved assignments of Coursera's most famous Deep Learning Specialization, a series of 5 courses offered by deeplearning.ai.

Instructor: Prof. Andrew Ng

This Specialization was updated in April 2021 to include developments in deep learning and programming frameworks. One of the most major changes was the shift from TensorFlow 1 to TensorFlow 2, and new materials were added. However, most of the old online repositories still don't have the updated code. This repo contains updated versions of the assignments. Happy Learning :)

Programming Assignments

Course 1: Neural Networks and Deep Learning

  • W2A1 - Logistic Regression with a Neural Network mindset
  • W2A2 - Python Basics with Numpy
  • W3A1 - Planar data classification with one hidden layer
  • W4A1 - Building your Deep Neural Network: Step by Step
  • W4A2 - Deep Neural Network for Image Classification: Application

Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization

  • W1A1 - Initialization
  • W1A2 - Regularization
  • W1A3 - Gradient Checking
  • W2A1 - Optimization Methods
  • W3A1 - Introduction to TensorFlow

Course 3: Structuring Machine Learning Projects

  • There were no programming assignments in this course. It was completely theoretical.
  • Here is a link to the course

Course 4: Convolutional Neural Networks

  • W1A1 - Convolutional Model: step by step
  • W1A2 - Convolutional Model: application
  • W2A1 - Residual Networks
  • W2A2 - Transfer Learning with MobileNet
  • W3A1 - Autonomous Driving - Car Detection
  • W3A2 - Image Segmentation - U-net
  • W4A1 - Face Recognition
  • W4A2 - Neural Style transfer

Course 5: Sequence Models

  • W1A1 - Building a Recurrent Neural Network - Step by Step
  • W1A2 - Character level language model - Dinosaurus land
  • W1A3 - Improvise A Jazz Solo with an LSTM Network
  • W2A1 - Operations on word vectors
  • W2A2 - Emojify
  • W3A1 - Neural Machine Translation With Attention
  • W3A2 - Trigger Word Detection
  • W4A1 - Transformer Network
  • W4A2 - Named Entity Recognition - Transformer Application
  • W4A3 - Extractive Question Answering - Transformer Application

I’ve uploaded these solutions here, only for being used as a help by those who get stuck somewhere. It may help them to save some time. I strongly recommend everyone to not directly copy any part of the code (from here or anywhere else) while doing the assignments of this specialization. The assignments are fairly easy and one learns a great deal of things upon doing these. Thanks to the deeplearning.ai team for giving this treasure to us.

Connect with me

Name: Abdur Rahman

Institution: Indian Institute of Technology Delhi

Find me on:

LinkedIn

Deep-Learning-Specialization

Coursera Deep Learning Specialization - Course 1: Neural Networks and Deep Learning.

In this course, you will learn the foundations of deep learning. When you finish this class, you will:

  • Understand the major technology trends driving Deep Learning.
  • Be able to build, train and apply fully connected deep neural networks.
  • Know how to implement efficient (vectorized) neural networks.
  • Understand the key parameters in a neural network’s architecture.

Week 1: Introduction to deep learning

Be able to explain the major trends driving the rise of deep learning, and understand where and how it is applied today.

  • Quiz 1: Introduction to deep learning

Week 2: Neural Networks Basics

Learn to set up a machine learning problem with a neural network mindset. Learn to use vectorization to speed up your models.

  • Quiz 2: Neural Network Basics
  • Programming Assignment: Python Basics With Numpy
  • Programming Assignment: Logistic Regression with a Neural Network mindset

Week 3: Shallow neural networks

Learn to build a neural network with one hidden layer, using forward propagation and backpropagation.

  • Quiz 3: Shallow Neural Networks
  • Programming Assignment: Planar Data Classification with One Hidden Layer

Week 4: Deep Neural Networks

Understand the key computations underlying deep learning, use them to build and train deep neural networks, and apply them to computer vision.

  • Quiz 4: Key concepts on Deep Neural Networks
  • Programming Assignment: Building your Deep Neural Network Step by Step
  • Programming Assignment: Deep Neural Network Application

Course Certificate

Certificate

  • Computer Science and Engineering
  • NOC: Deep Learning - Part 1 (Video)
  • Co-ordinated by: IIT Ropar
  • Available from : 2018-04-25
  • Intro Video
  • Biological Neuron
  • From Spring to Winter of AI
  • The Deep Revival
  • From Cats to Convolutional Neural Networks
  • Faster, higher, stronger
  • The Curious Case of Sequences
  • Beating humans at their own games (literally)
  • The Madness (2013-)
  • (Need for) Sanity
  • Motivation from Biological Neurons
  • McCulloch Pitts Neuron, Thresholding Logic
  • Perceptrons
  • Error and Error Surfaces
  • Perceptron Learning Algorithm
  • Proof of Convergence of Perceptron Learning Algorithm
  • Deep Learning(CS7015): Linearly Separable Boolean Functions
  • Deep Learning(CS7015): Representation Power of a Network of Perceptrons
  • Deep Learning(CS7015): Sigmoid Neuron
  • Deep Learning(CS7015): A typical Supervised Machine Learning Setup
  • Deep Learning(CS7015): Learning Parameters: (Infeasible) guess work
  • Deep Learning(CS7015): Learning Parameters: Gradient Descent
  • Deep Learning(CS7015): Representation Power of Multilayer Network of Sigmoid Neurons
  • Feedforward Neural Networks (a.k.a multilayered network of neurons)
  • Learning Parameters of Feedforward Neural Networks (Intuition)
  • Output functions and Loss functions
  • Backpropagation (Intuition)
  • Backpropagation: Computing Gradients w.r.t. the Output Units
  • Backpropagation: Computing Gradients w.r.t. Hidden Units
  • Backpropagation: Computing Gradients w.r.t. Parameters
  • Backpropagation: Pseudo code
  • Derivative of the activation function
  • Information content, Entropy & cross entropy
  • Recap: Learning Parameters: Guess Work, Gradient Descent
  • Contour Maps
  • Momentum based Gradient Descent
  • Nesterov Accelerated Gradient Descent
  • Stochastic And Mini-Batch Gradient Descent
  • Tips for Adjusting Learning Rate and Momentum
  • Line Search
  • Gradient Descent with Adaptive Learning Rate
  • Bias Correction in Adam
  • Eigenvalues and Eigenvectors
  • Linear Algebra : Basic Definitions
  • Eigenvalue Decomposition
  • Principal Component Analysis and its Interpretations
  • PCA : Interpretation 2
  • PCA : Interpretation 3
  • PCA : Interpretation 3 (Contd.)
  • PCA : Practical Example
  • Singular Value Decomposition
  • Introduction to Autoencoders
  • Link between PCA and Autoencoders
  • Regularization in autoencoders (Motivation)
  • Denoising Autoencoders
  • Sparse Autoencoders
  • Contractive Autoencoders
  • Bias and Variance
  • Train error vs Test error
  • Train error vs Test error (Recap)
  • True error and Model complexity
  • L2 regularization
  • Dataset augmentation
  • Parameter sharing and tying
  • Adding Noise to the inputs
  • Adding Noise to the outputs
  • Early stopping
  • Ensemble Methods
  • Dropout
  • A quick recap of training deep neural networks
  • Unsupervised pre-training
  • Better activation functions
  • Better initialization strategies
  • Batch Normalization
  • One-hot representations of words
  • Distributed Representations of words
  • SVD for learning word representations
  • SVD for learning word representations (Contd.)
  • Continuous bag of words model
  • Skip-gram model
  • Skip-gram model (Contd.)
  • Contrastive estimation
  • Hierarchical softmax
  • GloVe representations
  • Evaluating word representations
  • Relation between SVD and Word2Vec
  • The convolution operation
  • Relation between input size, output size and filter size
  • Convolutional Neural Networks
  • Convolutional Neural Networks (Contd.)
  • CNNs (success stories on ImageNet)
  • CNNs (success stories on ImageNet) (Contd.)
  • Image Classification continued (GoogLeNet and ResNet)
  • Visualizing patches which maximally activate a neuron
  • Visualizing filters of a CNN
  • Occlusion experiments
  • Finding influence of input pixels using backpropagation
  • Guided Backpropagation
  • Optimization over images
  • Create images from embeddings
  • Deep Dream
  • Deep Art
  • Fooling Deep Convolutional Neural Networks
  • Sequence Learning Problems
  • Recurrent Neural Networks
  • Backpropagation through time
  • The problem of Exploding and Vanishing Gradients
  • Some Gory Details
  • Selective Read, Selective Write, Selective Forget - The Whiteboard Analogy
  • Long Short Term Memory(LSTM) and Gated Recurrent Units(GRUs)
  • How LSTMs avoid the problem of vanishing gradients
  • How LSTMs avoid the problem of vanishing gradients (Contd.)
  • Introduction to Encoder Decoder Models
  • Applications of Encoder Decoder models
  • Attention Mechanism
  • Attention Mechanism (Contd.)
  • Attention over images
  • Hierarchical Attention


CS230 Deep Learning

Deep Learning is one of the most highly sought after skills in AI. In this course, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects. You will learn about Convolutional networks, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, and more.

Instructors


Time and Location

Wednesday 9:30AM-11:20AM Zoom

Course Information

  • This quarter (2023 Spring), CS230 meets for virtual in-class lecture Wednesday 9:30AM-11:20AM on Zoom.
  • All class communication happens on the CS230 Ed forum. For private matters, please make a private note visible only to the course instructors. For longer discussions with TAs and to get help in person, we strongly encourage you to come to office hours.
  • The course content and deadlines for all assignments are listed in our syllabus.
  • For general inquiries, please contact [email protected].
  • Please DO NOT reach out to the instructors’ emails or individual teaching staff’s emails. Instead, please contact the teaching staff at [email protected] for the fastest response. Because of the size of the course, emails tend to get lost when reaching out to individuals in the teaching team. General inquiries to the mailing list ([email protected]) will help us get back to you in a timely manner.
  • If you are interested in auditing the course, fill out this form.

Course Staff


Course Assistants


All course announcements take place through the CS230 Ed forum. Please make sure to join!

Class components

CS230 has the following components:

  • In class (virtual) lecture - once a week (hosted on Zoom). You can access lectures by going to the “Zoom” tab of Canvas.
  • Video lectures, programming assignments, and quizzes on Coursera
  • A midterm covering material from the first half of the quarter
  • The final project
  • Weekly TA-led sections

The flipped classroom format

CS230 follows a flipped-classroom format, every week you will have:

  • Virtual lectures on Wednesday 9:30AM-11:20AM: these lectures will be a mix of advanced lectures on a specific subject that hasn’t been treated in depth in the videos or guest lectures from industry experts. You can access these lectures on the Zoom tab on Canvas, and they will also be posted afterwards on Canvas.
  • Two modules from the deeplearning.ai Deep Learning Specialization on Coursera. You will watch videos at home, solve quizzes and programming assignments hosted on online notebooks.
  • TA-led sections on Fridays: Teaching Assistants will teach you hands-on tips and tricks to succeed in your projects, but also the theoretical foundations of deep learning.
  • Project meeting with your TA mentor: CS230 is a project-based class. Through personalized guidance, TAs will help you succeed in implementing a successful deep learning project within a quarter.

One module of the deeplearning.ai Deep Learning Specialization on Coursera includes:

  • Lecture videos which are organized in “weeks”. You will have to watch around 10 videos (more or less 10min each) every week.
  • Quizzes (≈10-30min to complete) at the end of every week to assess your understanding of the material.
  • Programming assignments (≈2h per week to complete). The programming assignments will usually lead you to build concrete algorithms, you will get to see your own result after you’ve completed all the code. It’s gonna be fun! For both assignment and quizzes, follow the deadlines on the Syllabus page, not on Coursera.

Prerequisites

Students are expected to have the following background, and are invited to take the Workera technical assessments to self-assess prior to taking the class:

  • Knowledge of basic computer science principles and skills, at a level sufficient to write a reasonably non-trivial computer program. This corresponds to a Developing level (or badge) in the “Algorithmic Coding” section on Workera.
  • Familiarity with probability theory (CS 109 or STATS 116), which students can assess by taking the “Data Science” section on Workera.
  • Familiarity with linear algebra (MATH 51), which students can assess by taking the “Mathematics” section on Workera.

Below is the breakdown of the class grade:

  • 40%: Final project (broken into proposal, milestone, final report and final video)
  • 25%: Midterm
  • 25%: Programming assignment
  • 8%: Quizzes
  • 2%: Meeting Attendance

Note: For project meetings, every group must meet 3 times throughout the quarter:

  • Before the project proposal deadline to discuss and validate the project idea. This can be with any TA.
  • Before the milestone deadline, with your assigned project TA .
  • Before the final report deadline, again with your assigned project TA.

Every student is allowed to and encouraged to meet more with the TAs, but only the 3 meetings above count towards the final participation grade.

Submitting Assignments

From the Coursera sessions (accessible from the invite you receive by email), you will be able to watch videos, solve quizzes and complete programming assignments. Each quiz and programming assignment can be submitted directly from the session and will be graded by our autograders.

You will submit your project deliverables on Gradescope . You should be added to Gradescope automatically by the end of the first week. If you are not added by the first week of the course, please make a private post on Ed.

Late assignments

Each student will have a total of ten free late (calendar) days to use for programming assignments, quizzes, project proposal and project milestone. Each late day is bound to only one assignment and is per student.

For example , if one quiz and one programming assignment are submitted 3 hours after the deadline, this results in 2 late days being used.

For example , if a group submitted their project proposal 23 hours after the deadline, this results in 1 late day being used per student.

Once these late days are exhausted, any assignments turned in late will be penalized 20% per late day. However, no assignment will be accepted more than three days after its due date, and late days cannot be used for the final project and final presentation. Each 24 hours or part thereof that a homework is late uses up one full late day. Also, note that if you submit an assignment multiple times, only the last one will be taken into account, in which case the number of late days will be calculated based on the last submission.

Students with Documented Disabilities

Students who may need an academic accommodation based on the impact of a disability must initiate the request with the Office of Accessible Education (OAE). Professional staff will evaluate the request with required documentation, recommend reasonable accommodations, and prepare an Accommodation Letter for faculty. Unless the student has a temporary disability, Accommodation letters are issued for the entire academic year. Students should contact the OAE as soon as possible since timely notice is needed to coordinate accommodations. The OAE is located at 563 Salvatierra Walk (phone: 723-1066).

We strongly encourage students to form study groups. Students may discuss and work on programming assignments and quizzes in groups. However, each student must write down the solutions independently, and without referring to written notes from the joint session. In other words, each student must understand the solution well enough in order to reconstruct it by him/herself. In addition, each student should submit his/her own code and mention anyone he/she collaborated with. It is also an honor code violation to copy, refer to, or look at written or code solutions from a previous year, including but not limited to: official solutions from a previous year, solutions posted online, and solutions you or someone else may have written up in a previous year. Furthermore, it is an honor code violation to post your assignment solutions online, such as on a public git repo.

The Stanford Honor Code

The Stanford Honor Code as it pertains to CS courses



Frequently asked Deep Learning Interview Questions and Answers


The demand for Deep Learning has grown over the years, and its applications are used in every business sector. Companies are now on the lookout for skilled professionals who can use deep learning and machine learning techniques to build models that mimic human behavior. As per Indeed, the average salary for a deep learning engineer in the United States is $133,580 per annum. In this tutorial, you will go over the top 45 frequently asked Deep Learning interview questions.


Deep Learning Interview Questions and Answers

Check out some of the frequently asked deep learning interview questions below:

1. What is Deep Learning?

If you are going for a deep learning interview, you definitely know what deep learning is. However, with this question the interviewer expects you to give an in-detail answer, with an example. Deep Learning involves taking large volumes of structured or unstructured data and using complex algorithms to train neural networks. It performs complex operations to extract hidden patterns and features (for instance, distinguishing the image of a cat from that of a dog).



2. What is a Neural Network?

Neural Networks replicate the way humans learn, inspired by how the neurons in our brains fire, only much simpler.


The most common Neural Networks consist of three network layers:

  • An input layer
  • A hidden layer (this is the most important layer where feature extraction takes place, and adjustments are made to train faster and function better)
  • An output layer

Each layer contains neurons called “nodes,” performing various operations. Neural Networks are used in deep learning algorithms like CNN, RNN, GAN, etc.

3. What Is a Multi-layer Perceptron(MLP)?

As in Neural Networks, MLPs have an input layer, a hidden layer, and an output layer. It has the same structure as a single-layer perceptron with one or more hidden layers. A single-layer perceptron can classify only linearly separable classes with binary output (0, 1), but an MLP can classify nonlinear classes.

Except for the input layer, each node in the other layers uses a nonlinear activation function. The data coming in through the input layer is combined with the weights at each node, and a nonlinear activation function is applied to the weighted sum to produce the node's output. MLP uses a supervised learning method called “backpropagation.” In backpropagation, the neural network calculates the error with the help of a cost function and propagates this error backward from where it came, adjusting the weights to train the model more accurately.

4. What Is Data Normalization, and Why Do We Need It?

The process of standardizing and reforming data is called “Data Normalization.” It’s a pre-processing step to eliminate data redundancy. Often, data comes in, and you get the same information in different formats. In these cases, you should rescale values to fit into a particular range, achieving better convergence.

5. What is the Boltzmann Machine?

One of the most basic Deep Learning models is a Boltzmann Machine, resembling a simplified version of the Multi-Layer Perceptron. This model features a visible input layer and a hidden layer -- just a two-layer neural net that makes stochastic decisions as to whether a neuron should be on or off. Nodes are connected across layers, but no two nodes of the same layer are connected.

6. What Is the Role of Activation Functions in a Neural Network?

At the most basic level, an activation function decides whether a neuron should fire or not. It accepts the weighted sum of the inputs plus the bias as its input. The step function, Sigmoid, ReLU, Tanh, and Softmax are examples of activation functions.


7. What Is the Cost Function?

Also referred to as “loss” or “error,” the cost function is a measure of how well your model performs. It’s used to compute the error of the output layer during backpropagation. We push that error backward through the neural network and use it during the different training functions.


8. What Is Gradient Descent?

Gradient Descent is an optimization algorithm used to minimize the cost function, i.e., to minimize the error. The aim is to find the local or global minimum of the function. This determines the direction the model should take to reduce the error.


9. What Do You Understand by Backpropagation?

This is one of the most frequently asked deep learning interview questions. Backpropagation is a technique to improve the performance of the network. It backpropagates the error and updates the weights to reduce the error.


10. What Is the Difference Between a Feedforward Neural Network and Recurrent Neural Network?

In this deep learning interview question, the interviewee expects you to give a detailed answer.

In a Feedforward Neural Network, signals travel in one direction, from input to output. There are no feedback loops; the network considers only the current input. It cannot memorize previous inputs (e.g., CNN).

A Recurrent Neural Network’s signals travel in both directions, creating a looped network. It considers the current input with the previously received inputs for generating the output of a layer and can memorize past data due to its internal memory.


11. What Are the Applications of a Recurrent Neural Network (RNN)?

The RNN can be used for sentiment analysis, text mining, and image captioning. Recurrent Neural Networks can also address time series problems such as predicting the prices of stocks in a month or quarter.

12. What Are the Softmax and ReLU Functions?

Softmax is an activation function that generates outputs between zero and one. It divides each output such that the total sum of the outputs is equal to one. Softmax is often used for output layers.


ReLU (or Rectified Linear Unit) is the most widely used activation function. It gives an output of X if X is positive and zero otherwise. ReLU is often used for hidden layers.


13. What Are Hyperparameters?

This is another frequently asked deep learning interview question. With neural networks, you’re usually working with hyperparameters once the data is formatted correctly. A hyperparameter is a parameter whose value is set before the learning process begins. It determines how a network is trained and the structure of the network (such as the number of hidden units, the learning rate, epochs, etc.).


14. What Will Happen If the Learning Rate Is Set Too Low or Too High?

When your learning rate is too low, training of the model will progress very slowly as we are making minimal updates to the weights. It will take many updates before reaching the minimum point.

If the learning rate is set too high, the loss function can show undesirable divergent behavior due to drastic updates in the weights. The model may fail to converge or even diverge (the updates are too chaotic for the network to settle on a good solution).


15. What Is Dropout and Batch Normalization?

Dropout is a technique of dropping out hidden and visible units of a network randomly to prevent overfitting of data (typically dropping 20 percent of the nodes). It doubles the number of iterations needed to converge the network.


Batch normalization is the technique to improve the performance and stability of neural networks by normalizing the inputs in every layer so that they have mean output activation of zero and standard deviation of one.
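
For illustration, here is a minimal Keras sketch that uses both techniques (the layer sizes and input shape are hypothetical):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(10,)),
    tf.keras.layers.BatchNormalization(),   # normalizes the layer's inputs
    tf.keras.layers.Activation("relu"),
    tf.keras.layers.Dropout(0.2),           # randomly drops 20 percent of units while training
    tf.keras.layers.Dense(1, activation="sigmoid"),
])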

The next step on this top Deep Learning interview questions and answers blog will be to discuss intermediate questions.

16. What Is the Difference Between Batch Gradient Descent and Stochastic Gradient Descent?


Batch Gradient Descent:

  • Computes the gradient using the entire dataset.
  • Takes time to converge because the volume of data is huge, and weights update slowly.

Stochastic Gradient Descent:

  • Computes the gradient using a single sample.
  • Converges much faster than batch gradient descent because it updates the weights more frequently.


17. What is Overfitting and Underfitting, and How to Combat Them?

Overfitting occurs when the model learns the details and noise in the training data to the degree that it adversely impacts the execution of the model on new information. It is more likely to occur with nonlinear models that have more flexibility when learning a target function. An example would be if a model is looking at cars and trucks, but only recognizes trucks that have a specific box shape. It might not be able to notice a flatbed truck because there's only a particular kind of truck it saw in training. The model performs well on training data, but not in the real world.

Underfitting refers to a model that is neither well-trained on the data nor able to generalize to new information. This usually happens when there is too little or incorrect data to train the model. An underfit model has both poor performance and poor accuracy.

To combat overfitting and underfitting, you can resample the data to estimate the model's accuracy (k-fold cross-validation) and use a validation dataset to evaluate the model.

18. How Are Weights Initialized in a Network?

There are two methods here: we can either initialize the weights to zero or assign them randomly.

Initializing all weights to 0: This makes your model similar to a linear model. All the neurons and every layer perform the same operation, giving the same output and making the deep net useless.

Initializing all weights randomly: Here, the weights are assigned randomly by initializing them very close to 0. It gives better accuracy to the model since every neuron performs different computations. This is the most commonly used method.

19. What Are the Different Layers on CNN?

There are four layers in CNN:

  • Convolutional Layer -  the layer that performs a convolutional operation, creating several smaller picture windows to go over the data.
  • ReLU Layer - it brings non-linearity to the network and converts all the negative pixels to zero. The output is a rectified feature map.
  • Pooling Layer - pooling is a down-sampling operation that reduces the dimensionality of the feature map.
  • Fully Connected Layer - this layer recognizes and classifies the objects in the image.

20. What is Pooling on CNN, and How Does It Work?

Pooling is used to reduce the spatial dimensions of a CNN. It performs down-sampling operations to reduce the dimensionality and creates a pooled feature map by sliding a filter matrix over the input matrix.


21. How Does an LSTM Network Work?

Long Short-Term Memory (LSTM) is a special kind of recurrent neural network capable of learning long-term dependencies, remembering information for long periods as its default behavior. There are three steps in an LSTM network:

  • Step 1: The network decides what to forget and what to remember.
  • Step 2: It selectively updates cell state values.
  • Step 3: The network decides what part of the current state makes it to the output.


22. What Are Vanishing and Exploding Gradients?

While training an RNN, your slope can become either too small or too large; this makes the training difficult. When the slope is too small, the problem is known as a “Vanishing Gradient.” When the slope tends to grow exponentially instead of decaying, it’s referred to as an “Exploding Gradient.” Gradient problems lead to long training times, poor performance, and low accuracy.


23. What Is the Difference Between Epoch, Batch, and Iteration in Deep Learning?

  • Epoch - Represents one iteration over the entire dataset (everything put into the training model).
  • Batch - Refers to when we cannot pass the entire dataset into the neural network at once, so we divide the dataset into several batches.
  • Iteration - Refers to one update step. If we have 10,000 images as data and a batch size of 200, then an epoch runs 50 iterations (10,000 divided by 200).

24. Why is Tensorflow the Most Preferred Library in Deep Learning?

Tensorflow provides both C++ and Python APIs, making it easier to work on, and has a faster compilation time compared to other Deep Learning libraries like Keras and Torch. Tensorflow supports both CPU and GPU computing devices.

25. What Do You Mean by Tensor in Tensorflow?

This is another most frequently asked deep learning interview question. A tensor is a mathematical object represented as arrays of higher dimensions. These arrays of data with different dimensions and ranks fed as input to the neural network are called “Tensors.”


26. What Are the Programming Elements in Tensorflow?

Constants - Constants are parameters whose value does not change. To define a constant we use the tf.constant() command. For example:

a = tf.constant(2.0, tf.float32)

b = tf.constant(3.0)

print(a, b)

Variables - Variables allow us to add new trainable parameters to the graph. To define a variable, we use the tf.Variable() command and initialize it before running the graph in a session. An example:

W = tf.Variable([.3].dtype=tf.float32)

b = tf.Variable([-.3].dtype=tf.float32)

Placeholders - these allow us to feed data to a TensorFlow model from outside the model, permitting a value to be assigned later. To define a placeholder, we use the tf.placeholder() command (a TensorFlow 1.x API that was removed in TensorFlow 2.x). An example, where b is defined so the session has a node to evaluate:

a = tf.placeholder(tf.float32)
b = a * 2

with tf.Session() as sess:
    result = sess.run(b, feed_dict={a: 3.0})
    print(result)

Sessions - a session is run to evaluate the nodes. This is called the “TensorFlow runtime” (also a TensorFlow 1.x API). For example:

a = tf.constant(2.0)
b = tf.constant(4.0)
c = a * b

# Launch the session
sess = tf.Session()

# Evaluate the tensor c
print(sess.run(c))

27. Explain a Computational Graph.

Everything in TensorFlow is based on creating a computational graph: a network of nodes, where nodes represent mathematical operations and edges represent the tensors that flow between them. Since data flows through the graph, it is also called a “DataFlow Graph.”

28. Explain Generative Adversarial Network.

Suppose there is a wine shop purchasing wine from dealers, which they resell later. But some dealers sell fake wine. In this case, the shop owner should be able to distinguish between fake and authentic wine.

The forger will try different techniques to sell fake wine and make sure specific techniques go past the shop owner’s check. The shop owner would probably get some feedback from wine experts that some of the wine is not original. The owner would have to improve how he determines whether a wine is fake or authentic.

The forger’s goal is to create wines that are indistinguishable from the authentic ones, while the shop owner intends to accurately tell whether the wine is real or fake.

Main components of Generator and Discriminator

Let us understand this example with the help of an image shown above.

There is a noise vector coming into the forger who is generating fake wine.

Here the forger acts as a Generator.

The shop owner acts as a Discriminator.

The Discriminator gets two inputs; one is the fake wine, while the other is the real authentic wine. The shop owner has to figure out whether it is real or fake.

So, there are two primary components of a Generative Adversarial Network (GAN):

  • Generator

  • Discriminator

The generator is a network (often a CNN) that keeps producing images that look closer and closer in appearance to the real images, while the discriminator tries to determine the difference between real and fake images. The ultimate aim is to make the discriminator learn to identify real and fake images.
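
A minimal sketch of the two components in Keras (the layer sizes and the use of plain dense layers are assumptions made for brevity; in practice both networks are often convolutional):

import tensorflow as tf

# Generator: turns a 100-dimensional noise vector into a fake
# 28x28 image (flattened to 784 values).
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dense(784, activation="tanh"),
])

# Discriminator: outputs the probability that its input is real.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

noise = tf.random.normal((1, 100))  # the "noise vector" from the story
fake = generator(noise)             # the forger produces fake wine
print(discriminator(fake).shape)    # (1, 1): the shop owner's verdict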

29. What Is an Auto-encoder?


This neural network has three layers, in which the input neurons are equal in number to the output neurons. The network's target output is the same as its input. It uses dimensionality reduction to restructure the input: it works by compressing the input to a latent-space representation and then reconstructing the output from this representation.
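
A minimal Keras sketch (the 784-dimensional input and 32-dimensional latent code are assumptions for illustration):

import tensorflow as tf

# Encoder compresses the 784-dimensional input into a 32-dimensional
# latent representation; the decoder reconstructs the original 784 values.
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(784,)),  # encoder
    tf.keras.layers.Dense(784, activation="sigmoid"),                  # decoder
])

# The target output is the input itself, so the loss compares
# the reconstruction against the original data.
autoencoder.compile(optimizer="adam", loss="mse")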

30. What Is Bagging and Boosting?

Bagging and Boosting are ensemble techniques that train multiple models using the same learning algorithm and then combine their predictions.

What is Bagging?

With Bagging, we take a dataset and split it into training data and test data. Then we randomly sample subsets of the training data (with replacement) to place into the bags, and train a separate model on each bag.

What is Boosting?

With Boosting, models are trained sequentially, and the emphasis is on the data points that previous models classified incorrectly, in order to improve accuracy.
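
A small scikit-learn sketch of both ideas (the synthetic dataset and estimator choices are illustrative):

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Bagging: each tree is trained on a random bootstrap sample ("bag").
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10).fit(X, y)

# Boosting: each new learner focuses on the points earlier learners got wrong.
boosting = AdaBoostClassifier(n_estimators=10).fit(X, y)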

The following are some of the most important advanced deep learning interview questions that you should know!

31. What is the significance of using the Fourier transform in Deep Learning tasks?

The Fourier transform decomposes a signal into its constituent frequencies, and its fast implementation (the FFT) makes it efficient to analyze, maintain, and manage large datasets. It is helpful for processing multiple signals, since array data can be transformed into the frequency domain in real time; via the convolution theorem, it also allows large convolutions to be computed as cheaper element-wise multiplications in the frequency domain.

32. What do you understand by transfer learning? Name a few commonly used transfer learning models.

Transfer learning is the process of transferring the learning from a model to another model without having to train it from scratch. It takes critical parts of a pre-trained model and applies them to solve new but similar machine learning problems.

Some of the popular transfer learning models are (a minimal usage sketch follows the list):

  • Inception V3
  • VGG-16
  • ResNet-50
  • BERT
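
A minimal Keras sketch using Inception V3 pre-trained on ImageNet (the input shape and the 10-class head are assumptions for illustration):

import tensorflow as tf

# Load Inception V3 without its classification head.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet",
    input_shape=(299, 299, 3), pooling="avg",
)
base.trainable = False  # freeze the pre-trained feature extractor

# Attach a new head for a hypothetical 10-class problem.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),
])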

33. What is the difference between SAME and VALID padding in Tensorflow?

Using the TensorFlow library, tf.nn.max_pool performs the max-pooling operation. tf.nn.max_pool has a padding argument that takes one of two values: SAME or VALID.

padding=“SAME” ensures that the filter is applied to all the elements of the input: the input image gets fully covered by the filter with the specified stride, because zeros are added around the edges as needed. The padding type is named SAME because the output size is the same as the input size (when stride=1).

padding=“VALID” implies there is no padding in the input image, and the filter window always stays inside the input image. Only the “valid” positions, where the window fits entirely within the input, are used, so the output is generally smaller than the input.
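
To see the difference, here is a small sketch using a 5x5 input with a 2x2 window and stride 2 (the numbers are illustrative):

import numpy as np
import tensorflow as tf

x = tf.constant(np.arange(25, dtype=np.float32).reshape(1, 5, 5, 1))

# SAME: the input is zero-padded so the filter covers every element.
same = tf.nn.max_pool2d(x, ksize=2, strides=2, padding="SAME")
# VALID: no padding; the window never leaves the input.
valid = tf.nn.max_pool2d(x, ksize=2, strides=2, padding="VALID")

print(same.shape)   # (1, 3, 3, 1)
print(valid.shape)  # (1, 2, 2, 1)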

34. What are some of the uses of Autoencoders in Deep Learning?

  • Autoencoders are used to convert black and white images into colored images.
  • Autoencoders help to extract features and hidden patterns in the data.
  • They are also used to reduce the dimensionality of data.
  • They can also be used to remove noise from images.

35. What is the Swish Function?

Swish is an activation function proposed by Google which is an alternative to the ReLU activation function. 

It is represented as: f(x) = x * sigmoid(x).

The Swish function works better than ReLU for a variety of deeper models. 

The derivative of Swish can be written as: y' = y + sigmoid(x) * (1 - y)
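
A small NumPy check of the function and its derivative identity:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def swish(x):
    return x * sigmoid(x)

# Verify y' = y + sigmoid(x) * (1 - y) against a numerical derivative.
x = 0.5
y = swish(x)
analytic = y + sigmoid(x) * (1 - y)
numeric = (swish(x + 1e-6) - swish(x - 1e-6)) / 2e-6
print(np.isclose(analytic, numeric))  # True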

36. What are the reasons for mini-batch gradient being so useful?

  • Mini-batch gradient descent is computationally efficient compared to single-example stochastic gradient descent, since batches make better use of vectorized hardware.
  • It helps you attain generalization by tending to find flat minima.
  • The mini-batch gradient approximates the gradient of the whole dataset, which helps avoid getting stuck in poor local minima.

37. What do you understand by Leaky ReLU activation function?

Leaky ReLU is an advanced version of the ReLU activation function. The standard ReLU function has a gradient of 0 whenever the input is less than zero, which can deactivate those neurons entirely. To overcome this problem, the Leaky ReLU activation function is used: it has a very small slope for negative values instead of a flat slope.
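
A minimal NumPy sketch (the slope value 0.01 is a common choice, not a fixed standard):

import numpy as np

def leaky_relu(x, alpha=0.01):
    # A small slope (alpha) for negative inputs instead of a flat zero.
    return np.where(x > 0, x, alpha * x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # [-0.02  0.    3.  ]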

38. What is Data Augmentation in Deep Learning?

Data Augmentation is the process of creating new data by enhancing the size and quality of training datasets to ensure better models can be built using them. There are different techniques to augment data such as numerical data augmentation, image augmentation, GAN-based augmentation, and text augmentation.
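
An illustrative image-augmentation pipeline with Keras preprocessing layers (the specific transformations chosen are just examples):

import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

images = tf.random.uniform((4, 32, 32, 3))  # a fake batch of 4 RGB images
augmented = augment(images, training=True)  # randomly perturbed copies
print(augmented.shape)                      # (4, 32, 32, 3)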

39. Explain the Adam optimization algorithm.

Adaptive Moment Estimation, or Adam optimization, is an extension to stochastic gradient descent. This algorithm is useful when working with complex problems involving vast amounts of data or parameters. It needs less memory and is efficient.

The Adam optimization algorithm is a combination of two gradient descent methodologies: Momentum and Root Mean Square Propagation (RMSProp).
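
A rough NumPy sketch of a single Adam update, showing how the two methodologies combine (the hyperparameter values are the commonly used defaults):

import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad       # momentum: running mean of gradients
    v = b2 * v + (1 - b2) * grad ** 2  # RMSProp: running mean of squared gradients
    m_hat = m / (1 - b1 ** t)          # bias correction for the
    v_hat = v / (1 - b2 ** t)          # zero-initialized moments
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = adam_step(w=1.0, grad=0.5, m=0.0, v=0.0, t=1)
print(w)  # ≈ 0.999: one small step against the gradient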

40. Why is a convolutional neural network preferred over a dense neural network for an image classification task?

  • The number of parameters in a convolutional neural network is much smaller than that of a dense neural network, so a CNN is less likely to overfit.
  • CNNs allow you to look at the weights of a filter and visualize what the network learned, which gives a better understanding of the model.
  • CNNs learn in a hierarchical way: they explain complex patterns in terms of simpler ones learned in earlier layers.

41. Which strategy does not prevent a model from over-fitting to the training data?

  • a) Dropout
  • b) Pooling
  • c) Data augmentation
  • d) Early stopping

Answer: b) Pooling - it’s a layer in a CNN that performs a downsampling operation; it reduces spatial dimensions but does not prevent over-fitting. Dropout, data augmentation, and early stopping all help prevent over-fitting.

42. Explain two ways to deal with the vanishing gradient problem in a deep neural network.

  • Use the ReLU activation function instead of the sigmoid function.
  • Initialize the network with Xavier initialization, which works well with the tanh activation.

43. Why is a deep neural network better than a shallow neural network?

Both deep and shallow neural networks can approximate the values of a function. But the deep neural network is more efficient as it learns something new in every layer. A shallow neural network has only one hidden layer. But a deep neural network has several hidden layers that create a deeper representation and computation capability.

44. What is the need to add randomness in the weight initialization process?

If you set the weights to zero, then every neuron at each layer will produce the same result and the same gradient value during backpropagation. So, the neural network won’t be able to learn the function as there is no asymmetry between the neurons. Hence, randomness to the weight initialization process is crucial.

45. How can you train hyperparameters in a neural network?

Hyperparameters in a neural network can be tuned using four components:

Batch size: the number of training samples processed before the model's weights are updated.

Epochs: the number of times the entire training dataset is shown to the neural network during training.

Momentum: controls how much of the previous weight update is carried into the current update, which smooths the optimization path.

Learning rate: controls the size of each weight update, i.e., how quickly the network updates its parameters and learns.

What’s Next For You?

The above questions will help you get an understanding of the different theoretical and conceptual questions asked in Deep Learning interviews. The set of questions will give you the confidence to ace deep learning and machine learning interviews.


Zero Padding in Deep Learning and Signal Processing

Zero padding is a technique commonly used in digital signal processing, machine learning, deep learning, and other computational domains to standardize data dimensions, ensure optimal performance, or preserve the original structure of input data. Zero padding involves adding extra zeros to the input data, matrix, or signal, ensuring that the data has a specific shape or size that is suitable for further processing.

In this article, we will explore the various applications of zero padding, its role in different fields, and how it impacts the efficiency and accuracy of computational models.

Table of Content

  • Understanding Zero Padding
  • Zero Padding in Deep Learning
    1. Zero Padding in Convolutional Neural Networks (CNNs)
    2. Zero Padding in Recurrent Neural Networks (RNNs)
  • Zero Padding in Signal Processing: Fast Fourier Transform (FFT)
  • Drawbacks of Zero Padding
  • Alternatives to Zero Padding

Understanding Zero Padding

In its simplest form, zero padding means adding zeros to a data array or matrix, either at its edges or at specific positions. The goal is to modify the dimensions of the data without introducing any additional meaningful information. For instance, zero padding can be used to resize an image or signal, making it conform to the desired input size for a neural network.

Here’s an example to visualize zero padding in a 2D matrix (used for image processing):

Without Zero Padding:

\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{bmatrix}

With Zero Padding (padding size = 1):

\begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 2 & 3 & 0 \\ 0 & 4 & 5 & 6 & 0 \\ 0 & 7 & 8 & 9 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \end{bmatrix}

In this example, zeros are added around the original matrix, increasing its size from 3 \times 3 to 5 \times 5 .

Zero Padding in Convolutional Neural Networks (CNNs)

In CNNs, zero padding is commonly used during the convolution operation to maintain the spatial dimensions of the input data, such as images. Convolution operations often reduce the size of feature maps because the filters are smaller than the input. Zero padding helps prevent this size reduction by adding zeros around the edges of the input image.

How Zero Padding Works in CNNs?

The main types of padding used in CNNs are:

  • Same Padding : Zero padding is added such that the output size is the same as the input size.
  • Valid Padding : No padding is added, which means the output size will be smaller than the input size.

Formula for Output Size with Zero Padding in CNNs

The output size after a convolution operation with zero padding can be calculated as:

\text{Output Size} = \frac{\left( \text{Input Size} + 2 \times \text{Padding} - \text{Kernel Size} \right)}{\text{Stride}} + 1

  • Input Size : Height or width of the input image.
  • Padding : Number of zeros added around the input.
  • Kernel Size : Size of the convolution filter.
  • Stride : The number of steps the filter moves over the input.

For an image of size 5 \times 5 , kernel size 3 \times 3 , padding 1, and stride 1, the output size would be:

\text{Output Size} = \frac{(5 + 2 \times 1 - 3)}{1} + 1 = 5

Thus, the output size remains 5 \times 5 , ensuring that the dimensions are preserved throughout the convolution layers.

Python Implementation: Zero Padding in CNNs
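
The code block itself did not survive the page extraction; the following is a plausible reconstruction consistent with the output shape described below (the random input image is an assumption):

import numpy as np
import tensorflow as tf

# One 28x28 grayscale image (batch of 1, single channel).
image = np.random.rand(1, 28, 28, 1).astype("float32")

# Convolution with 32 filters of size 3x3 and zero ("same") padding.
conv = tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), padding="same")

output = conv(image)
print(output.shape)  # (1, 28, 28, 32)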

The output shape (1, 28, 28, 32) can be explained as follows:

  • 1 : The batch size, meaning there is 1 image being processed.
  • 28, 28 : The height and width of the image remain 28×28, which is the same as the input size. This is because we applied zero padding with padding='same' , ensuring that the spatial dimensions (height and width) are preserved after the convolution operation.
  • 32 : The number of filters used in the convolution layer. In this case, we applied 32 filters, so the output has 32 channels (feature maps).

Benefits of Zero Padding in CNNs

  • Preserving Dimensions : Prevents shrinking of the output feature maps, which is crucial for deeper networks.
  • Boundary Feature Detection : Enables the detection of features near the edges of images, ensuring no data loss at the boundaries.

Zero Padding in Recurrent Neural Networks (RNNs)

RNNs are used to process sequential data, such as time series or text. The lengths of sequences often vary, creating challenges for batch processing. Zero padding helps ensure uniform sequence lengths by adding zeros to shorter sequences, allowing efficient batch processing while preserving the temporal order.

Zero Padding in RNNs with Masking

To prevent padded zeros from affecting the learning process, masking is applied. Masking tells the RNN to ignore the padded zeros during training.

Implementation of Zero Padding in RNNs

Here’s an example Python code demonstrating padding in a Recurrent Neural Network (RNN) for processing multiple sentences with different word lengths. We’ll use the Keras library with TensorFlow backend, where padding is applied to make all sentences the same length before feeding them into the RNN.
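
The original code block was lost in extraction; here is a minimal reconstruction consistent with the description below (the token indices and vocabulary size are made up):

import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Three tokenized "sentences" of different lengths (word indices).
sequences = [
    [1, 2, 3],
    [4, 5, 6, 7, 8],
    [9, 10, 11, 12, 13, 14, 15],
]

# Zero-pad every sequence to the length of the longest one (7).
padded = pad_sequences(sequences, padding="post")
print(padded.shape)  # (3, 7)

model = tf.keras.Sequential([
    # mask_zero=True makes downstream layers ignore the padded zeros.
    tf.keras.layers.Embedding(input_dim=50, output_dim=8, mask_zero=True),
    tf.keras.layers.SimpleRNN(16),
])

print(model(padded).shape)  # (3, 16)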

The RNN processes 3 padded sequences, each of length 7, and returns a 16-dimensional output for each sequence.

Benefits of Zero Padding in RNNs

  • Efficient Batch Processing : Allows sequences of different lengths to be processed in parallel.
  • Uniform Input Shape : Provides a consistent input size, which is necessary for model training.

Zero Padding in Signal Processing: Fast Fourier Transform (FFT)

In signal processing, zero padding is often applied before performing a Fast Fourier Transform (FFT). The FFT converts time-domain signals into the frequency domain, and zero padding increases the length of the signal, improving the frequency resolution.

By adding zeros to the end of a signal, the FFT result contains more points, so the spectrum is sampled on a finer frequency grid. The zero-padded signal does not contain new frequency information, but the finer grid makes the analysis easier to interpret.

Formula for Zero Padding in FFT

The length of the zero-padded signal is calculated as:

N_{\text{padded}} = N + P

  • N is the original length of the signal.
  • P is the number of zeros added for padding.

For a signal of length N = 8 and adding P = 8 zeros, the new signal length becomes N_{\text{padded}} = 16 . This improves the frequency resolution, helping in analyzing frequency components with more precision.
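
A quick NumPy illustration of this (the test signal is arbitrary):

import numpy as np

# Original signal of length N = 8.
signal = np.sin(2 * np.pi * 0.2 * np.arange(8))

# Append P = 8 zeros, giving a padded length of 16.
padded = np.pad(signal, (0, 8))

# The FFT of the padded signal has 16 frequency bins instead of 8:
# the same spectrum sampled on a finer grid, with no new information.
spectrum = np.fft.fft(padded)
print(len(spectrum))  # 16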

Applications of Zero Padding in FFT

  • Improved Frequency Resolution: Zero padding enhances the frequency resolution, making it easier to detect subtle differences in the frequency components of a signal.
  • Aligning Signals: In cases where signals of different lengths need to be compared, zero padding helps align them to a common length without altering the original signal content.

Benefits of Zero Padding in Signal Processing

  • Increased Frequency Resolution: More data points in the frequency domain, allowing for finer analysis.
  • Simplicity in Signal Comparison: Helps in aligning signals of different lengths for comparison in FFT analysis.
  • Preservation of Signal Structure: Ensures that the original signal is not altered, except for the added zeros at the end.

Drawbacks of Zero Padding

Although zero padding is a powerful technique, it has some drawbacks:

  • Artificial Data Introduction : Zero padding introduces artificial zeros into the data, which may affect the learning process if not handled carefully.
  • Increased Computational Complexity : Padding increases the size of the data, leading to higher computational costs, especially in large datasets.
  • Potential Overfitting : In some cases, zero padding may lead to overfitting, especially when the model becomes too sensitive to the padded values.

Alternatives to Zero Padding

In some situations, alternatives to zero padding might be more effective:

  • Reflect Padding: Instead of padding with zeros, the input is padded with reflections of the input values at the boundary.
  • Edge Padding: The input is padded with the values from the nearest edge pixel, maintaining a closer relationship between the padded values and the input data.

Both techniques address the problem of introducing artificial zeros, which can distort the convolution results near the edges.
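
NumPy’s np.pad supports all three modes, which makes the difference easy to see on a small array:

import numpy as np

row = np.array([1, 2, 3, 4])

print(np.pad(row, 2, mode="constant"))  # zero:    [0 0 1 2 3 4 0 0]
print(np.pad(row, 2, mode="reflect"))   # reflect: [3 2 1 2 3 4 3 2]
print(np.pad(row, 2, mode="edge"))      # edge:    [1 1 1 2 3 4 4 4]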

Zero padding is an essential tool across both deep learning and signal processing. In deep learning, zero padding helps maintain the dimensions of feature maps in CNNs and ensures uniform sequence lengths in RNNs. In signal processing, it improves frequency resolution in FFT and preserves image size during convolution operations in image processing. While zero padding introduces artificial data and may increase computational load, its advantages in preserving dimensions, enabling batch processing, and improving analysis accuracy make it indispensable in modern computational techniques.
