About Machine Learning ( Part 9: Recurrent Neural Network )

Recurrent Neural Networks (RNNs) are a class of neural networks designed for sequential data, making them highly effective for tasks like natural language processing (NLP), time series prediction, and speech recognition. Unlike traditional feedforward networks, RNNs maintain a hidden state that captures temporal dependencies.

How RNNs Work

A traditional feedforward neural network processes inputs independently. However, for sequential tasks, the order of the data is crucial. RNNs address this by maintaining a memory of previous inputs through hidden states.
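
Concretely, at each time step $t$ the hidden state is updated from the current input and the previous hidden state. A common formulation of a vanilla RNN cell (the weight matrices $W_{xh}$, $W_{hh}$, $W_{hy}$ and the biases are written here only for illustration) is:

$$
h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h), \qquad y_t = W_{hy} h_t + b_y
$$

where $x_t$ is the input, $h_t$ the hidden state, and $y_t$ the output at step $t$. The same weights are reused at every time step, which is what lets the network process sequences of arbitrary length.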


About Machine Learning ( Part 8: Convolutional Neural Networks )

Convolutional Neural Networks (CNNs) have revolutionized the field of computer vision, enabling significant advancements in image recognition, object detection, and segmentation tasks. This post explores the key concepts behind CNNs and how they work.

What is a CNN?

A Convolutional Neural Network (CNN) is a type of deep learning model specifically designed for processing structured grid data, such as images. Unlike traditional fully connected neural networks, CNNs leverage convolutional layers to capture spatial hierarchies in the data.
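
To make the convolution operation concrete, here is a minimal NumPy sketch of a single 2D convolution with stride 1 and no padding; the function name, toy image, and kernel values are illustrative rather than taken from any framework.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` (stride 1, no padding), summing element-wise products."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Each output value is a weighted sum over a local patch of the image.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])                 # vertical-edge-style filter
print(conv2d(image, kernel))                       # 3x3 feature map
```

A convolutional layer learns the kernel values during training and applies many such filters in parallel, so nearby pixels are combined first and larger spatial structures emerge in deeper layers.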


About Machine Learning ( Part 7: Artificial Neural Network )

Bayes’ theorem

$$
P(y|X) = \frac{P(X|y) P(y)}{P(X)}
$$

where:

  • $P(y|X)$: Posterior probability of class $y$ given input $X$.
  • $P(X|y)$: Likelihood of seeing $X$ if the class is $y$.
  • $P(y)$: Prior probability of class $y$.
  • $P(X)$: Total probability of $X$ (normalization factor).
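
As a quick sanity check with hypothetical numbers: suppose 20% of emails are spam, so $P(y) = 0.2$; the word "offer" appears in 60% of spam, $P(X|y) = 0.6$, and in 5% of non-spam, $P(X|\neg y) = 0.05$. Expanding the denominator with the law of total probability gives

$$
P(y|X) = \frac{0.6 \times 0.2}{0.6 \times 0.2 + 0.05 \times 0.8} = \frac{0.12}{0.16} = 0.75,
$$

so under these made-up numbers an email containing "offer" is spam with probability 0.75.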

Bayes Network (Bayesian Network, BN)

A Bayesian network (BN) is a probabilistic graphical model that represents dependencies between variables as a directed acyclic graph. It consists of two components, illustrated in the small example below:

  • Nodes: Represent variables (e.g., symptoms, diseases).
  • Edges: Directed links representing conditional dependencies; each node carries a probability distribution conditioned on its parents.
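
For instance, the smallest useful network has a single edge, Disease → Symptom. The Python sketch below represents each node by a conditional probability table and answers a query by enumeration; the variable names and probabilities are hypothetical, chosen only to show the structure.

```python
# A minimal two-node Bayesian network: Disease -> Symptom.
# Names and probabilities are made up for illustration.

p_disease = {True: 0.01, False: 0.99}        # P(Disease)
p_symptom_given_disease = {                  # P(Symptom | Disease)
    True:  {True: 0.90, False: 0.10},
    False: {True: 0.05, False: 0.95},
}

def p_disease_given_symptom(symptom=True):
    """Infer P(Disease | Symptom) by enumerating the joint distribution."""
    joint = {d: p_disease[d] * p_symptom_given_disease[d][symptom]
             for d in (True, False)}
    return joint[True] / (joint[True] + joint[False])

print(round(p_disease_given_symptom(True), 3))  # prints 0.154 with these numbers
```

Real networks have many more nodes, and exact enumeration quickly becomes expensive, which is why dedicated inference algorithms such as variable elimination are used in practice.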
