
Coding Our First Neurons

Chapter 1: Introducing Neural Networks

Note: these notes are based on the book *Neural Networks from Scratch in Python*; the author also has a YouTube channel.

I have already written several introductory posts on neural networks, so I won't repeat the basics here; links below.
1. A Glimpse into Neural Networks
2. Introduction to Neural Networks (Part 1)
3. Introduction to Neural Networks (Part 2)





Chapter 2: Coding Our First Neurons

2.1 A Single Neuron

# create a neuron with three inputs and one output
# inputs -> neuron activation (weights*inputs + bias) -> output
# 3 inputs, 3 weights, 1 bias (1 neuron), 1 output
inputs = [1.0, 2.0, 3.0]
weights = [0.2, 0.8, -0.5]
bias = 2.0
# output = weights . inputs + bias
output = (weights[0]*inputs[0] + weights[1]*inputs[1] + weights[2]*inputs[2] + bias)
print(output)
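The three multiply-adds above can also be written as a loop over input/weight pairs, which generalizes to any number of inputs. A small sketch of my own (the function name is not from the book):

```python
# A single neuron: the weighted sum of its inputs plus a bias.
def single_neuron(inputs, weights, bias):
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w
    return total

print(single_neuron([1.0, 2.0, 3.0], [0.2, 0.8, -0.5], 2.0))
```

This computes the same 2.3 as the explicit version above.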

2.2 A Single Neuron with NumPy

import numpy as np

inputs = [1.0, 2.0, 3.0]
weights = [0.2, 0.8, -0.5]
bias = 2.0
outputs = np.dot(weights, inputs) + bias
print(outputs)

2.3 A Layer of Neurons

# 4 inputs -> 3 neurons (3 biases) -> 3 outputs
inputs = [1.0, 2.0, 3.0, 2.5]
weights = [[0.2, 0.8, -0.5, 1.0],
           [0.5, -0.91, 0.26, -0.5],
           [-0.26, -0.27, 0.17, 0.87]]
biases = [2.0, 3.0, 0.5]
outputs = [
    weights[0][0]*inputs[0] + weights[0][1]*inputs[1] + weights[0][2]*inputs[2] + weights[0][3]*inputs[3] + biases[0],
    weights[1][0]*inputs[0] + weights[1][1]*inputs[1] + weights[1][2]*inputs[2] + weights[1][3]*inputs[3] + biases[1],
    weights[2][0]*inputs[0] + weights[2][1]*inputs[1] + weights[2][2]*inputs[2] + weights[2][3]*inputs[3] + biases[2],
]
print(outputs)
inputs = [1.0, 2.0, 3.0, 2.5]
weights = [[0.2, 0.8, -0.5, 1.0],
           [0.5, -0.91, 0.26, -0.5],
           [-0.26, -0.27, 0.17, 0.87]]
biases = [2.0, 3.0, 0.5]
outputs = []
total = 0  # running multiply-accumulate for one neuron ("sum" would shadow the built-in)
for i in range(len(weights)):
    for j in range(len(inputs)):
        output = weights[i][j] * inputs[j]
        print("weight:", weights[i][j], "inputs:", inputs[j])
        total += output
    print("bias:", biases[i])
    total += biases[i]
    outputs.append(total)
    total = 0
print(outputs)


The book's version:

inputs = [1.0, 2.0, 3.0, 2.5]
weights = [[0.2, 0.8, -0.5, 1.0],
           [0.5, -0.91, 0.26, -0.5],
           [-0.26, -0.27, 0.17, 0.87]]
# weights[0] connects all inputs to the first neuron
# weights[1] connects all inputs to the second neuron
# weights[2] connects all inputs to the third neuron
biases = [2.0, 3.0, 0.5]
# biases[0] is the first neuron's bias
# biases[1] is the second neuron's bias
# biases[2] is the third neuron's bias
layer_outputs = []
# zip pairs each neuron's weights with its bias so we can iterate neuron by neuron
# neuron_weights: the weights connecting the inputs to one neuron
# neuron_bias: that neuron's bias
for neuron_weights, neuron_bias in zip(weights, biases):
    neuron_output = 0  # reset for each neuron so the next one starts from zero
    # multiply-accumulate every input with this neuron's weights
    # n_input: one element of inputs; weight: the matching element of neuron_weights
    for n_input, weight in zip(inputs, neuron_weights):
        neuron_output += n_input * weight
    neuron_output += neuron_bias
    layer_outputs.append(neuron_output)
print(layer_outputs)

2.4 A Layer of Neurons with NumPy

import numpy as np

inputs = [1.0, 2.0, 3.0, 2.5]
weights = [[0.2, 0.8, -0.5, 1.0],
           [0.5, -0.91, 0.26, -0.5],
           [-0.26, -0.27, 0.17, 0.87]]
biases = [2.0, 3.0, 0.5]
# weights is 3x4 and inputs is a length-4 vector, so np.dot gives 3 neuron outputs
outputs = np.dot(weights, inputs) + biases
print(outputs)
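As a sanity check (my own addition, not from the book), the NumPy result matches the hand-computed multiply-accumulate neuron by neuron:

```python
import numpy as np

inputs = [1.0, 2.0, 3.0, 2.5]
weights = [[0.2, 0.8, -0.5, 1.0],
           [0.5, -0.91, 0.26, -0.5],
           [-0.26, -0.27, 0.17, 0.87]]
biases = [2.0, 3.0, 0.5]

np_outputs = np.dot(weights, inputs) + biases

# recompute each neuron manually and compare
manual = [sum(w * x for w, x in zip(neuron_w, inputs)) + b
          for neuron_w, b in zip(weights, biases)]
print(np.allclose(np_outputs, manual))  # True
```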

2.5 A Batch of Data

Lists are useful containers for holding a sample, as well as multiple samples that make up a batch of observations.

Why feed the network a whole batch of data at a time? Two reasons:
1. The samples in a batch can be processed in parallel, which speeds up training.
2. It helps the network generalize. Each update fits the samples in the current batch: with very few samples per batch, the network tends to overfit to those few samples, whereas with more samples per batch, each update has to account for more of the data.
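In code, a batch is just a list of samples, each sample itself a list of feature values. A minimal sketch of my own, reusing the batch values from the book's later example:

```python
import numpy as np

# a batch of 3 samples, each with 4 features
batch = np.array([[1.0, 2.0, 3.0, 2.5],
                  [2.0, 5.0, -1.0, 2.0],
                  [-1.5, 2.7, 3.3, -0.8]])
print(batch.shape)  # (3, 4): 3 samples (rows), 4 features (columns)
```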


2.6 A Layer of Neurons & Batch of Data with NumPy

We now have a matrix of inputs and a matrix of weights, and we need to perform the dot product on them somehow. Treating both matrices as lists of vectors, we perform the dot product on every combination of those vectors, producing a list of lists of outputs, i.e. a matrix; this operation is called the matrix product.

We mentioned that we need to perform dot products on all of the vectors that make up the input and weight matrices. As we have just learned, that is exactly the operation the matrix product performs.
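The matrix product can be read as "all pairwise dot products": entry (i, j) of the result is the dot product of row i of the first matrix with column j of the second. A small sketch of my own, with tiny illustrative values:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

product = a @ b  # same as np.dot(a, b)
# entry (0, 0) is row 0 of a dotted with column 0 of b: 1*5 + 2*7 = 19
print(product[0, 0] == np.dot(a[0], b[:, 0]))  # True
print(product)
# [[19 22]
#  [43 50]]
```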

Note: when converting a list to a NumPy array here, add an extra pair of brackets.
Wrapping the original list in another pair of brackets passes it as a nested list, so NumPy creates a two-dimensional array (a matrix).
The outer brackets define the "rows" of the array and the inner elements define the "columns". So [a] is really [[1, 2, 3]], which is explicitly a 2-D structure with one row and three columns.
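A quick shape check makes the difference concrete (a small sketch of my own):

```python
import numpy as np

a = [1, 2, 3]
print(np.array(a).shape)    # (3,)   -- a 1-D array (a vector)
print(np.array([a]).shape)  # (1, 3) -- a 2-D array: one row, three columns
```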

Why do we need to transpose the weight matrix?
Once the input becomes a batch, the inner dimensions no longer match: the input matrix has 4 columns (features) while the weight matrix has 3 rows (neurons), so the matrix product cannot be performed.

After transposing the weight matrix:

import numpy as np

inputs = [[1.0, 2.0, 3.0, 2.5],
          [2.0, 5.0, -1.0, 2.0],
          [-1.5, 2.7, 3.3, -0.8]]
weights = [[0.2, 0.8, -0.5, 1.0],
           [0.5, -0.91, 0.26, -0.5],
           [-0.26, -0.27, 0.17, 0.87]]
biases = [2.0, 3.0, 0.5]
# Converting inputs and weights to arrays lets us use NumPy's matrix operations.
# NumPy arrays support efficient operations such as the dot product,
# which simplifies the code and speeds up the computation.
outputs = np.dot(np.array(inputs), np.array(weights).T) + biases
print(outputs)
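The shapes explain why the transpose is needed: inputs is (3, 4) and weights is (3, 4), so np.dot(inputs, weights) would try to match 4 input columns against 3 weight rows and fail. Transposing weights to (4, 3) makes the inner dimensions agree, and each sample (row) yields 3 neuron outputs. A shape-only sketch of my own:

```python
import numpy as np

inputs = np.array([[1.0, 2.0, 3.0, 2.5],
                   [2.0, 5.0, -1.0, 2.0],
                   [-1.5, 2.7, 3.3, -0.8]])      # (3, 4): 3 samples, 4 features
weights = np.array([[0.2, 0.8, -0.5, 1.0],
                    [0.5, -0.91, 0.26, -0.5],
                    [-0.26, -0.27, 0.17, 0.87]])  # (3, 4): 3 neurons, 4 weights each

print(weights.T.shape)                  # (4, 3)
print(np.dot(inputs, weights.T).shape)  # (3, 3): 3 samples x 3 neurons
```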

