Neural networks

Case study notes[1]

Neural Networks Overview

Neural networks are a set of algorithms inspired by the structure and function of the human brain. They are designed to recognize patterns and learn from data, making them a powerful tool in fields like machine learning and artificial intelligence (AI). Neural networks consist of interconnected units (or "neurons") that work together to process inputs and produce an output. These models are especially useful for tasks such as classification, regression, image recognition, and natural language processing.

Structure of Neural Networks

At a high level, a neural network is made up of layers:

  • Input Layer: This layer takes in the input features (data) and passes them to the next layer. It performs no computation itself; it is simply the gateway for information entering the network.
  • Hidden Layers: These layers sit between the input and output layers. Neurons in these layers transform the input data by applying weights and an activation function to introduce non-linearity into the network.
  • Output Layer: This layer produces the final result of the neural network. It can have one or more neurons, depending on whether the task is binary classification, multi-class classification, or regression.

Each connection between neurons has an associated weight, and each neuron also has a bias. These weights and biases are adjusted through training, allowing the network to learn how to map inputs to outputs.
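
To make this layered structure concrete, here is a minimal sketch of a single forward pass in Python with NumPy. The layer sizes, the random weight initialization, and the sigmoid activation are illustrative assumptions, not details taken from the case study.

  import numpy as np

  rng = np.random.default_rng(0)

  # Illustrative sizes: 3 input features, 4 hidden neurons, 1 output neuron
  x = rng.random(3)                  # input layer: just holds the features
  W1 = rng.standard_normal((4, 3))   # weights into the hidden layer
  b1 = np.zeros(4)                   # hidden-layer biases
  W2 = rng.standard_normal((1, 4))   # weights into the output layer
  b2 = np.zeros(1)                   # output-layer bias

  def sigmoid(z):
      # Non-linear activation applied by each hidden/output neuron
      return 1.0 / (1.0 + np.exp(-z))

  hidden = sigmoid(W1 @ x + b1)       # hidden layer: weights + bias + activation
  output = sigmoid(W2 @ hidden + b2)  # output layer: the network's final result
  print(output)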

The Perceptron

The perceptron is the simplest type of artificial neural network and forms the basis of more complex models. Introduced by Frank Rosenblatt in 1958, it is a binary classifier that maps input vectors to a single output using a step function. The perceptron operates as follows:

Structure of a Perceptron

A perceptron has a similar structure to the neurons in a neural network:

  • Input Layer: Accepts several input values. For example, for an input vector <math> (x_1, x_2, \ldots, x_n) </math>, each value is passed through the perceptron.
  • Weights: Each input <math> x_i </math> is multiplied by a corresponding weight <math> w_i </math>. The weights determine the importance of each input to the final decision.
  • Bias: A bias is added to shift the activation function, allowing more flexible decision boundaries.
  • Summation and Activation: The weighted sum of the inputs plus the bias is computed, and an activation function is applied to the result. For a perceptron, the activation function is usually a step function, which outputs a binary value (0 or 1). This means the perceptron can only classify linearly separable data.

Mathematically, the perceptron can be represented as:

<math>
y = f\left( \sum_{i=1}^{n} w_i x_i + b \right)
</math>

Where:

  • <math> y </math> is the output (either 0 or 1),
  • <math> w_i </math> are the weights,
  • <math> x_i </math> are the inputs,
  • <math> b </math> is the bias,
  • <math> f </math> is the activation function (often a step function).
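
As a concrete sketch of this computation, the snippet below (Python with NumPy; the logical-AND task and the hand-picked weights are illustrative assumptions) implements a single perceptron with a step activation:

  import numpy as np

  def step(z):
      # Step activation: 1 if the weighted sum clears the threshold, else 0
      return 1 if z >= 0 else 0

  def perceptron(x, w, b):
      # Weighted sum of the inputs plus the bias, then the step function
      return step(np.dot(w, x) + b)

  # Hand-picked weights that make the perceptron compute logical AND
  w = np.array([1.0, 1.0])
  b = -1.5
  for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
      print(x, perceptron(np.array(x), w, b))  # prints 1 only for (1, 1)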

Learning in Perceptrons

The perceptron adjusts its weights using a learning algorithm called the perceptron learning rule. This rule updates the weights in response to errors in the classification. If the perceptron makes an incorrect prediction, the weights are adjusted to reduce the error.

The update rule for each weight is:

<math>
w_i \leftarrow w_i + \Delta w_i
</math>

Where the weight change <math> \Delta w_i </math> is defined as:

<math>
\Delta w_i = \eta \cdot (y_{\text{true}} - y_{\text{predicted}}) \cdot x_i
</math>

Here, <math> \eta </math> is the learning rate, which controls how much the weights are updated during each iteration.
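
A minimal sketch of this rule in Python (NumPy assumed; the learning rate, epoch count, and the logical-AND training data are illustrative assumptions):

  import numpy as np

  # Linearly separable training data: logical AND
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
  y_true = np.array([0, 0, 0, 1])

  w = np.zeros(2)   # weights, initially zero
  b = 0.0           # bias
  eta = 0.1         # learning rate

  for epoch in range(10):
      for x, target in zip(X, y_true):
          predicted = 1 if np.dot(w, x) + b >= 0 else 0
          error = target - predicted   # (y_true - y_predicted)
          w += eta * error * x         # delta w_i = eta * error * x_i
          b += eta * error             # the bias is updated the same way

  print(w, b)  # a weight vector and bias that separate the AND data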

Limitations of the Perceptron

One major limitation of the perceptron is that it can only solve linearly separable problems: if no straight line (or hyperplane) can separate the two classes, a single-layer perceptron cannot classify every point correctly. For example, the XOR problem is not solvable by a single-layer perceptron.
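
A quick way to see this is to run the same learning rule on the XOR truth table (again a hypothetical Python/NumPy experiment): however many epochs it trains, at least one of the four points stays misclassified, because no single line separates the 1s from the 0s.

  import numpy as np

  # XOR truth table: the output is 1 exactly when the two inputs differ
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
  y_xor = np.array([0, 1, 1, 0])

  w, b, eta = np.zeros(2), 0.0, 0.1
  for epoch in range(1000):
      for x, target in zip(X, y_xor):
          predicted = 1 if np.dot(w, x) + b >= 0 else 0
          w += eta * (target - predicted) * x
          b += eta * (target - predicted)

  # Count the points the trained perceptron still gets wrong
  errors = sum(int(np.dot(w, x) + b >= 0) != t for x, t in zip(X, y_xor))
  print(errors)  # always >= 1: XOR is not linearly separable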

From Perceptrons to Multi-Layer Neural Networks

To overcome the limitations of perceptrons, neural networks are extended into multi-layer architectures, known as Multi-Layer Perceptrons (MLPs). An MLP contains one or more hidden layers with neurons that use non-linear activation functions such as sigmoid, ReLU (Rectified Linear Unit), or tanh. These hidden layers allow the network to learn more complex patterns and solve problems that are not linearly separable.

The process of training a multi-layer neural network is called backpropagation. This algorithm uses gradient descent to adjust the weights in all layers based on the error at the output, allowing the network to improve its performance over time.
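
The sketch below puts these pieces together: a one-hidden-layer MLP trained by backpropagation (gradient descent on a squared-error loss) to solve XOR, the very problem a single perceptron cannot. It assumes NumPy, and the hidden-layer size, learning rate, and iteration count are made-up choices; convergence also depends on the random initialization.

  import numpy as np

  rng = np.random.default_rng(0)

  # XOR data: not linearly separable, so a hidden layer is required
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([[0.0], [1.0], [1.0], [0.0]])

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  # One hidden layer of 4 sigmoid neurons, one sigmoid output neuron
  W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)
  W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)
  eta = 1.0

  for step in range(5000):
      # Forward pass through both layers
      h = sigmoid(X @ W1 + b1)
      out = sigmoid(h @ W2 + b2)

      # Backward pass: propagate the output error back through each layer
      d_out = (out - y) * out * (1 - out)  # error signal at the output
      d_h = (d_out @ W2.T) * h * (1 - h)   # error pushed back to the hidden layer

      # Gradient-descent updates for every weight and bias
      W2 -= eta * h.T @ d_out
      b2 -= eta * d_out.sum(axis=0)
      W1 -= eta * X.T @ d_h
      b1 -= eta * d_h.sum(axis=0)

  print(out.round(2))  # typically close to [0, 1, 1, 0] after training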

Applications of Neural Networks

Neural networks are widely used across various domains, including:

  • Image Recognition: Identifying objects or faces in images.
  • Natural Language Processing (NLP): Language translation, sentiment analysis, and text generation.
  • Speech Recognition: Converting spoken language into text.
  • Recommendation Systems: Suggesting products, movies, or music based on user preferences.

Conclusion

Neural networks, starting from the basic perceptron model, have evolved into complex systems capable of solving highly sophisticated tasks. While the perceptron is limited to linearly separable data, the development of multi-layer neural networks and non-linear activation functions has enabled significant advancements in AI and machine learning.

These systems continue to transform industries by providing state-of-the-art solutions to problems once thought to be beyond the reach of computers.



References