
Dam101_journal1

Topic: Neural Networks and Deep Learning.

Greetings everyone! Today's journal is about the lessons I have learned from the UNIT 1 sessions I attended. In our recent session, we learned about the fundamentals of Neural Networks and Deep Learning, along with a quick recap of AI and its fundamentals.

In the first session we did a quick recap of AI, where we learned that AI came into existence around 1957, when Arthur Samuel developed a program that learned to play checkers by playing against something other than a human (How did AI come to be?). Over time, researchers improved AI as computers became faster and better algorithms came into play. Then we learned about how AI started helping doctors, made self-driving possible, and even helps in predicting things (How has AI evolved?). After that, our tutor Sir Darshan Subedi explained that to understand Neural Networks and Deep Learning, it is important to distinguish between Machine Learning (ML) and Deep Learning (DL). He taught us that ML involves algorithms that learn patterns from data, whereas Deep Learning takes it a step further, employing neural networks with multiple layers to extract complex features and representations.

Then we came to the topic of “How do these systems learn?”. At first our tutor asked for our opinions on the topic, but it was concluded that machines learn from experience through a process called training, where models are exposed to large amounts of data and adjust their parameters to improve their performance over time. Learning is already complex for a machine, and deep learning introduces a new level of complexity: deep neural networks with multiple layers can learn hierarchical representations, capturing complex patterns and dependencies within the data. We ended the first session of UNIT 1 by learning about data, and Sir gave us some questions to look through for the next class.
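To make the idea of “adjusting parameters to improve performance” concrete for myself, here is a tiny sketch I wrote (my own illustration, not something from class) of nudging a single weight with gradient descent so that the prediction gets closer to the target; the names `w`, `learning_rate`, etc. are just placeholders I chose:

```python
# A tiny illustration: fitting y = w * x to one data point by gradient descent.
x, y_true = 2.0, 10.0        # one training example (input and target)
w = 0.5                      # the model's single parameter, started at a guess
learning_rate = 0.05

for step in range(20):
    y_pred = w * x                       # model's prediction
    error = y_pred - y_true              # how far off we are
    gradient = 2 * error * x             # derivative of the squared error w.r.t. w
    w = w - learning_rate * gradient     # adjust the parameter to reduce the error

print(w)   # w moves toward 5.0, since 5.0 * 2.0 = 10.0
```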

Then, after going through the questions, I got to know a bit about “what are neurons in AI”, “what is a single-layer perceptron”, and “what variables are associated with modelling a single neuron”. From those questions, I learned the things I have stated below.

These are my understandings, based on each subtopic.

BASICS OF NEURAL NETWORKS

Neural networks are computational models inspired by the human brain. They are composed of layers of interconnected nodes (neurons), and each layer serves a unique purpose in processing information.
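To connect this with the question about the variables involved in modelling a single neuron, here is a small sketch I made in Python with NumPy (the numbers are arbitrary, chosen only for illustration): a neuron takes inputs, multiplies them by weights, adds a bias, and passes the result through an activation function.

```python
import numpy as np

def sigmoid(z):
    # Squashes any value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, 0.3, 0.2])    # example input values
weights = np.array([0.4, 0.7, -0.2])  # one weight per input (chosen arbitrarily)
bias = 0.1

z = np.dot(weights, inputs) + bias    # weighted sum of inputs plus bias
output = sigmoid(z)                   # the neuron's activation
print(output)
```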

NEURAL NETWORKS WORKING

A neural network consists of three kinds of layers, each with its own specific function; a small sketch of this flow follows the list below.

Input Layer: Receives the initial data.

Hidden Layers: Process and transform the data through weighted connections.

Output Layer: Produces the final result or prediction.
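Here is the sketch I mentioned above (my own toy example with made-up layer sizes and random weights), showing how data flows from the input layer, through a hidden layer of weighted connections, to the output layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

x = np.array([0.2, 0.8, 0.5])          # input layer: 3 features

# Hidden layer: 4 neurons, each with its own weights and bias
W1 = rng.normal(size=(4, 3))
b1 = np.zeros(4)
hidden = sigmoid(W1 @ x + b1)          # transform the data through weighted connections

# Output layer: 1 neuron producing the final result or prediction
W2 = rng.normal(size=(1, 4))
b2 = np.zeros(1)
prediction = sigmoid(W2 @ hidden + b2)
print(prediction)
```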

ACTIVATION FUNCTION

Activation functions introduce non-linearity into the network, allowing it to learn complex patterns; their role is to decide whether a neuron should be activated or not based on its input. There are several different activation functions, and three common ones are sigmoid, tanh, and ReLU.
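For my own reference, here is how those three activation functions can be written out in NumPy (the exact set covered in class may differ, so treat this as my personal notes):

```python
import numpy as np

def sigmoid(z):
    # Squashes any value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Squashes any value into the range (-1, 1)
    return np.tanh(z)

def relu(z):
    # Passes positive values through, zeroes out negatives
    return np.maximum(0, z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z))
```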

LOSS FUNCTION AND EPOCH

A loss function measures the difference between the predicted output and the actual target. An epoch represents a single pass through the entire training dataset.
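To keep the two ideas separate in my head, here is a small sketch of my own (a toy dataset, with mean squared error as the loss function), where each full pass over the dataset counts as one epoch:

```python
import numpy as np

def mse_loss(y_pred, y_true):
    # Loss function: average squared difference between prediction and target
    return np.mean((y_pred - y_true) ** 2)

# Toy dataset: learn y = 3x with a single weight w
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 6.0, 9.0, 12.0])
w, learning_rate = 0.0, 0.01

for epoch in range(5):                          # one epoch = one full pass over the dataset
    y_pred = w * X
    loss = mse_loss(y_pred, y)
    gradient = np.mean(2 * (y_pred - y) * X)    # gradient of the loss w.r.t. w
    w -= learning_rate * gradient               # adjust the parameter after each epoch
    print(f"epoch {epoch}: loss = {loss:.3f}")
```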

In brief, this journal covered the evolution of artificial intelligence and the difference between Machine Learning and Deep Learning. It also highlighted the layered structure of neural networks and the crucial role of activation functions. These are the basic concepts needed as a first step towards learning Neural Networks and Deep Learning.

This post is licensed under CC BY 4.0 by the author.