Seq2Seq with Attention for English Translation: Code Implementation and a Detailed Look at the Attention Mechanism
Without further ado, let's get straight to the code.
Data Loading Class
from io import open
import unicodedata
import re
import random
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import optim
# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Special token indices: start-of-sequence (SOS) and end-of-sequence (EOS)
SOS_token = 0
EOS_token = 1
class Lang:
...
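The `Lang` class is truncated above. A minimal sketch of the vocabulary helper commonly used for seq2seq data loading might look like the following; the method and attribute names (`addSentence`, `word2index`, and so on) are assumptions modeled on the standard PyTorch seq2seq tutorial convention, not the author's exact code:

```python
SOS_token = 0
EOS_token = 1

class Lang:
    """Maps words of one language to integer indices and back."""

    def __init__(self, name):
        self.name = name
        self.word2index = {}    # word -> integer id
        self.word2count = {}    # word -> frequency
        self.index2word = {SOS_token: "SOS", EOS_token: "EOS"}
        self.n_words = 2        # count the SOS and EOS tokens

    def addSentence(self, sentence):
        # Tokenize naively on whitespace and register each word.
        for word in sentence.split(' '):
            self.addWord(word)

    def addWord(self, word):
        if word not in self.word2index:
            self.word2index[word] = self.n_words
            self.word2count[word] = 1
            self.index2word[self.n_words] = word
            self.n_words += 1
        else:
            self.word2count[word] += 1
```

For example, after `lang = Lang("eng")` and `lang.addSentence("i am cold")`, `lang.n_words` is 5 (two special tokens plus three words).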
Posted on Wed, 27 Oct 2021 13:21:00 -0400 by TheHyipSite
Implementation of automatic parameter adjustment based on Bayesian optimization method
1. Bayesian optimization method
(Note: Bayesian optimization is a method, an idea.) Bayesian optimization builds a surrogate function (a probability model) of the objective, based on past evaluations, and uses it to find the value that minimizes the objective. The difference between the Bayesian method and random or grid search is that it ...
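The surrogate-model idea described above can be sketched as follows. This is an illustrative toy, not the article's implementation: the objective function, the candidate grid, and the use of scikit-learn's `GaussianProcessRegressor` with an expected-improvement acquisition function are all assumptions chosen for demonstration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    # Toy objective with its true minimum at x = 2.
    return (x - 2.0) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(4, 1))   # a few initial random evaluations
y = objective(X).ravel()
candidates = np.linspace(0.0, 5.0, 200).reshape(-1, 1)

for _ in range(10):
    # Fit the probabilistic surrogate to all evaluations so far.
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)

    # Expected improvement over the best value observed so far.
    best = y.min()
    z = (best - mu) / sigma
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    # Evaluate the true objective only at the most promising candidate.
    x_next = candidates[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

best_x = X[np.argmin(y), 0]
```

Unlike grid or random search, each new evaluation here is chosen by the surrogate, so the expensive objective is queried far fewer times.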
Posted on Wed, 27 Oct 2021 11:35:41 -0400 by Joe_Dean
How to deal with missing values in machine learning
Note: the data comes from Kaggle; for details, see here , and for the original reference link, see here . This is a long article that aims to introduce some ideas and details of the EDA process.
1. Introduction
The purpose of this EDA (Exploratory Data Analysis) is ...
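The usual first steps for handling missing values can be sketched with pandas and scikit-learn. The DataFrame and its column names below are invented purely for illustration; they are not from the article's Kaggle dataset:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy DataFrame with gaps in both numeric and categorical columns.
df = pd.DataFrame({
    "age":    [25.0, np.nan, 31.0, 47.0],
    "salary": [3000.0, 5200.0, np.nan, 6100.0],
    "city":   ["NY", None, "LA", "NY"],
})

# 1. Inspect: how many values are missing per column?
missing_counts = df.isna().sum()

# 2. Drop rows that are missing every value (none in this toy data).
df = df.dropna(how="all")

# 3. Impute: median for numeric columns, mode for categoricals.
num_cols = ["age", "salary"]
df[num_cols] = SimpleImputer(strategy="median").fit_transform(df[num_cols])
df["city"] = df["city"].fillna(df["city"].mode()[0])
```

Median imputation is robust to outliers; whether imputation, deletion, or a "missing" indicator column is right depends on why the values are missing in the first place.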
Posted on Thu, 23 Sep 2021 05:58:16 -0400 by g00bster
Linear regression and gradient descent explained in plain language
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import SGDRegressor
import pandas as pd
def linear_model():
# get data
...
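The `linear_model` function above is truncated. A sketch of how it might be completed follows; note that `load_boston` was removed from scikit-learn 1.2, so this version substitutes a synthetic dataset from `make_regression` — an assumption, not the article's original data:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression, SGDRegressor
from sklearn.metrics import mean_squared_error

def linear_model():
    # get data: a synthetic regression problem stands in for the
    # Boston housing set (removed in scikit-learn 1.2).
    X, y = make_regression(n_samples=500, n_features=10,
                           noise=1.0, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)

    # Standardize features so gradient descent converges well.
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    # Closed-form ordinary least squares (the normal equation).
    lr = LinearRegression().fit(X_train, y_train)
    mse_lr = mean_squared_error(y_test, lr.predict(X_test))

    # The same objective optimized by stochastic gradient descent.
    sgd = SGDRegressor(max_iter=5000, random_state=42).fit(X_train, y_train)
    mse_sgd = mean_squared_error(y_test, sgd.predict(X_test))
    return mse_lr, mse_sgd
```

Both models fit the same linear hypothesis; the difference is purely in how the weights are found — an exact algebraic solution versus iterative gradient steps, which is why feature scaling matters for the second but not the first.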
Posted on Tue, 21 Sep 2021 16:58:37 -0400 by xwishmasterx