Optimizing Neural Network Training - Hung-yi Lee Machine Learning: local minima -> saddle point / Optimization with Batch / Gradient Descent + Momentum / Adaptive Learning Rate / loss function / batch normalization 2022-07-20 Learning > Machine Learning #AI #Machine Learning #Deep Learning #critical point #local minima #saddle point #Optimization #Batch #momentum #gradient descent #adam #learning rate #RMSProp #learning rate decay #warm up #loss function #batch normalization #feature normalization #Root mean square #Cross-Entropy #Mean square error
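Since this entry's topic list names Gradient Descent + Momentum and an adaptive learning rate (RMSProp), here is a minimal Python sketch of both update rules on a toy quadratic loss; the hyperparameter values and the `gradient` helper are illustrative, not taken from the post.

```python
import numpy as np

# Toy loss L(w) = 0.5 * w**2, so dL/dw = w. Purely illustrative.
def gradient(w):
    return w

# Gradient descent + momentum: the step is a decayed sum of past gradients.
w, m = 5.0, 0.0
lr, beta = 0.1, 0.9            # learning rate, momentum coefficient (assumed values)
for _ in range(100):
    m = beta * m - lr * gradient(w)   # movement = last movement minus current gradient
    w = w + m
print(f"momentum: w = {w:.6f}")

# RMSProp-style adaptive learning rate: divide by a running RMS of gradients.
w, sigma = 5.0, 0.0
lr, alpha, eps = 0.1, 0.9, 1e-8       # alpha weights recent vs. past gradients
for _ in range(100):
    g = gradient(w)
    sigma = alpha * sigma + (1 - alpha) * g ** 2   # running mean of squared gradients
    w = w - lr / (np.sqrt(sigma) + eps) * g        # per-parameter effective step size
print(f"RMSProp:  w = {w:.6f}")
```

Adam, also tagged on this entry, combines the two ideas: the momentum term as its first moment and the RMS term as its second.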
Deep Learning and the Backpropagation Mechanism - Hung-yi Lee Machine Learning: DL is ML that uses neural networks. 2022-07-16 Learning > Machine Learning #Machine Learning #Deep Learning #OPA #Neural Network #feedforward #layer #Matrix Operation #backpropagation #activation function
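The tags on this entry (feedforward, layer, Matrix Operation, activation function, backpropagation) describe a forward pass as matrix multiplies followed by activations; a small self-contained sketch under that reading, with shapes, weights, and the squared-error loss chosen for illustration:

```python
import numpy as np

# Sigmoid activation applied elementwise after each linear layer.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 1))                        # input vector
W1, b1 = rng.normal(size=(4, 3)), np.zeros((4, 1))
W2, b2 = rng.normal(size=(1, 4)), np.zeros((1, 1))

# Forward pass: each layer is one matrix operation plus an activation.
a1 = sigmoid(W1 @ x + b1)
y = W2 @ a1 + b2                                   # linear output layer

# One backpropagation step for the output layer, under L = 0.5 * (y - t)^2:
t = np.array([[1.0]])
dL_dy = y - t                                      # backward pass starts from dL/dy
dL_dW2 = dL_dy @ a1.T                              # chain rule: dL/dW2 = dL/dy * a1^T
print("loss:", (0.5 * (y - t) ** 2).item())
print("dL/dW2:", dL_dW2.ravel())
```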
Error and Gradient Descent - Hung-yi Lee Machine Learning: error = bias + variance; Gradient Descent comes from the Taylor Series 2022-07-14 Learning > Machine Learning #AI #Gradient Descent #Machine Learning #Deep Learning #Adam #error #bias #variance #regularization
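Both claims in the excerpt can be written out explicitly; a sketch in standard notation (the symbols f, \hat{f}, \theta, \eta are mine, not from the post):

```latex
% Bias-variance decomposition of the expected squared error of an estimator \hat{f}:
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(f(x) - \mathbb{E}[\hat{f}(x)]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
  + \sigma^2 \;\; (\text{irreducible noise})

% Gradient descent from the first-order Taylor expansion of L around \theta_0:
L(\theta) \approx L(\theta_0) + \nabla L(\theta_0)^\top (\theta - \theta_0)
% The linear term is most negative when \theta - \theta_0 points against the
% gradient, giving the update rule
\theta \leftarrow \theta_0 - \eta \, \nabla L(\theta_0)
```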
Regression - Hung-yi Lee Machine Learning: I'm off to cram school for Pokémon~ 2022-07-12 Learning > Machine Learning #AI #Machine Learning #Deep Learning #Adam #regularization #Regression
Introduction to Machine Learning - Hung-yi Lee Machine Learning: thanks to datawhale for the team-learning opportunity~ 2022-07-11 Learning > Machine Learning #AI #Machine Learning #Deep Learning
Installing Python 3.8 and Uninstalling Python 3.6 on CentOS 7: remember to take a VM snapshot! Take a snapshot! Snapshot! 2022-07-08 Learning > python #speech recognition #kaldi #python3 #centos7 #python2