
ML/DL Interview Hand-Coding Notes

Datasets

Batch Iterator for a Dataset

Deep-ML | Batch Iterator for Dataset

Implement a batch-iterable function that samples from a numpy array X and an optional numpy array y. The function should yield batches of a specified size. If y is provided, it should yield batches of (X, y) pairs; otherwise, it should yield batches of X only.

Example:

Input:

X = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
y = np.array([1, 2, 3, 4, 5])
batch_size = 2
batch_iterator(X, y, batch_size)

Output:

[[[[1, 2], [3, 4]], [1, 2]],
 [[[5, 6], [7, 8]], [3, 4]],
 [[[9, 10]], [5]]]
import numpy as np

def batch_iterator(X, y=None, batch_size=64):
    n_samples = X.shape[0]
    batches = []
    # Walk through the samples in strides of batch_size;
    # the last batch may be smaller than batch_size
    for i in range(0, n_samples, batch_size):
        begin, end = i, min(i + batch_size, n_samples)
        if y is not None:
            batches.append([X[begin:end], y[begin:end]])
        else:
            batches.append(X[begin:end])
    return batches
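The statement says the function should yield batches; collecting them into a list, as above, satisfies the problem, but a generator is closer to that wording and avoids holding every batch in memory at once. A minimal sketch (the name batch_iterator_gen is ours, not Deep-ML's):

def batch_iterator_gen(X, y=None, batch_size=64):
    # Same slicing logic, but yield one batch at a time
    n_samples = X.shape[0]
    for i in range(0, n_samples, batch_size):
        begin, end = i, min(i + batch_size, n_samples)
        if y is not None:
            yield X[begin:end], y[begin:end]
        else:
            yield X[begin:end]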

Activation Functions

sigmoid


import math
def sigmoid(z: float) -> float:
    # 1 / (1 + e^(-z)), rounded to 4 decimal places
    result = 1 / (1 + math.exp(-z))
    return round(result, 4)
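A quick sanity check of the values it produces:

print(sigmoid(0))   # 0.5, since 1 / (1 + e^0) = 1/2
print(sigmoid(1))   # 0.7311 after rounding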

relu

Deep-ML | Implement ReLU Activation Function
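The note links the problem without a solution; a minimal sketch in the same style as the other snippets here (treat the exact signature as an assumption rather than the Deep-ML template):

def relu(z: float) -> float:
    # ReLU(z) = max(0, z): pass positive inputs through, clamp negatives to 0
    return max(0, z)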

leaky relu

Deep-ML | Leaky ReLU Activation Function

def leaky_relu(z: float, alpha: float = 0.01) -> float | int:
    return z if z > 0 else alpha * z
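Unlike ReLU, negative inputs keep a small slope instead of being zeroed out, e.g.:

print(leaky_relu(5))    # 5
print(leaky_relu(-2))   # -0.02 (scaled by alpha = 0.01)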

softmax

Deep-ML | Softmax Activation Function Implementation

import math

def softmax(scores: list[float]) -> list[float]:
    # Exponentiate each score, then normalize so the outputs sum to 1
    exp_scores = [math.exp(score) for score in scores]
    sum_exp_scores = sum(exp_scores)
    probabilities = [round(score / sum_exp_scores, 4) for score in exp_scores]
    return probabilities
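For large scores, math.exp can overflow; a common stabilization (not part of the original solution) subtracts the maximum score first, which leaves the result unchanged because the shift cancels in the ratio:

import math

def softmax_stable(scores: list[float]) -> list[float]:
    # exp(s - c) / sum(exp(s_i - c)) equals the unshifted softmax for any c;
    # choosing c = max(scores) keeps exp() in a safe range
    max_score = max(scores)
    exp_scores = [math.exp(score - max_score) for score in scores]
    sum_exp_scores = sum(exp_scores)
    return [round(score / sum_exp_scores, 4) for score in exp_scores]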

Gradient Descent

Linear Regression with Gradient Descent (MSE)


import numpy as np
def linear_regression_gradient_descent(X: np.ndarray, y: np.ndarray, alpha: float, iterations: int) -> np.ndarray:
    m, n = X.shape
    theta = np.zeros((n, 1))
    for _ in range(iterations):
        predictions = X @ theta
        errors = predictions - y.reshape(-1, 1)
        # Gradient of the MSE loss (up to a constant factor)
        updates = X.T @ errors / m
        theta -= alpha * updates
    return np.round(theta.flatten(), 4)
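A small worked example (data chosen by us, not from the original note): with a bias column of ones and a perfectly linear target, theta should approach the exact solution [0, 1]:

X = np.array([[1, 1], [1, 2], [1, 3]])  # first column is the bias term
y = np.array([1, 2, 3])
theta = linear_regression_gradient_descent(X, y, alpha=0.01, iterations=1000)
print(theta)  # approaches [0, 1] as iterations grow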

Gradient Descent Variants for MSE Loss

import numpy as np

def gradient_descent(X, y, weights, learning_rate, n_iterations, batch_size=1, method='batch'):
    m = len(y)
    for _ in range(n_iterations):
        if method == 'batch':
            # Calculate the gradient using all data points
            predictions = X.dot(weights)
            errors = predictions - y
            gradient = 2 * X.T.dot(errors) / m
            weights = weights - learning_rate * gradient
        elif method == 'stochastic':
            # Update weights for each data point individually
            for i in range(m):
                prediction = X[i].dot(weights)
                error = prediction - y[i]
                gradient = 2 * X[i].T.dot(error)
                weights = weights - learning_rate * gradient
        elif method == 'mini_batch':
            # Update weights using sequential batches of data points without shuffling
            for i in range(0, m, batch_size):
                X_batch = X[i:i+batch_size]
                y_batch = y[i:i+batch_size]
                predictions = X_batch.dot(weights)
                errors = predictions - y_batch
                gradient = 2 * X_batch.T.dot(errors) / batch_size
                weights = weights - learning_rate * gradient
    return weights
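A quick illustration of calling the three variants on the same data (the data and hyperparameters here are illustrative, not from the original note):

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([2.0, 3.0, 4.0, 5.0])  # exact fit is weights = [1, 1]

for method in ('batch', 'stochastic', 'mini_batch'):
    w = gradient_descent(X, y, np.zeros(2), learning_rate=0.01,
                         n_iterations=100, batch_size=2, method=method)
    print(method, w)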

Performance Metrics

Detecting Overfitting and Underfitting

Deep-ML | Detect Overfitting or Underfitting
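The note stops at the problem link. A minimal sketch of the usual heuristic, comparing training and test accuracy; the function name, return codes, and thresholds below are our assumptions, not necessarily the exact Deep-ML specification:

def model_fit_quality(training_accuracy: float, test_accuracy: float) -> int:
    # Assumed convention: 1 = overfitting, -1 = underfitting, 0 = good fit.
    # The 0.2 gap and 0.7 floor are illustrative thresholds.
    if training_accuracy - test_accuracy > 0.2:
        return 1    # train far above test -> likely overfitting
    if training_accuracy < 0.7 and test_accuracy < 0.7:
        return -1   # both accuracies low -> likely underfitting
    return 0        # small gap, reasonable accuracy -> good fit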

 

