Package: torch 0.13.0.9000

Daniel Falbel

torch: Tensors and Neural Networks with 'GPU' Acceleration

Provides functionality to define and train neural networks similar to 'PyTorch' by Paszke et al (2019) <doi:10.48550/arXiv.1912.01703> but written entirely in R using the 'libtorch' library. Also supports low-level tensor operations and 'GPU' acceleration.

Authors: Daniel Falbel [aut, cre, cph], Javier Luraschi [aut], Dmitriy Selivanov [ctb], Athos Damiani [ctb], Christophe Regouby [ctb], Krzysztof Joachimiak [ctb], Hamada S. Badr [ctb], Sebastian Fischer [ctb], RStudio [cph]

torch_0.13.0.9000.tar.gz
torch_0.13.0.9000.zip (r-4.5), torch_0.13.0.9000.zip (r-4.4), torch_0.13.0.9000.zip (r-4.3)
torch_0.13.0.9000.tgz (r-4.4-x86_64), torch_0.13.0.9000.tgz (r-4.4-arm64), torch_0.13.0.9000.tgz (r-4.3-x86_64), torch_0.13.0.9000.tgz (r-4.3-arm64)
torch_0.13.0.9000.tar.gz (r-4.5-noble), torch_0.13.0.9000.tar.gz (r-4.4-noble)
torch_0.13.0.9000.tgz (r-4.4-emscripten), torch_0.13.0.9000.tgz (r-4.3-emscripten)
torch.pdf | torch.html
torch/json (API)
NEWS

# Install 'torch' in R:
install.packages('torch', repos = c('https://mlverse.r-universe.dev', 'https://cloud.r-project.org'))
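
# A quick sanity check after installing (a minimal sketch; on first use,
# library(torch) may offer to download the precompiled libtorch backend):
library(torch)
x <- torch_randn(2, 3)          # 2x3 tensor of standard-normal draws
y <- x$mm(torch_randn(3, 1))    # matrix multiply via $-methods, as in PyTorch
y$shape                         # 2 1
cuda_is_available()             # TRUE when a supported GPU setup is found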


Bug tracker: https://github.com/mlverse/torch/issues

Uses libs:
  • c++ – GNU Standard C++ Library v3

On CRAN: yes

autograd, deep-learning, torch

730 exports · 489 stars · 8.04 score · 16 dependencies · 32 dependents · 11 mentions · 1.0k scripts · 4.2k downloads

Last updated 6 days ago from: 28a7a40b33. Checks: OK: 5, NOTE: 4. Indexed: yes.

Target | Result | Date
Doc / Vignettes | OK | Sep 11 2024
R-4.5-win-x86_64 | NOTE | Sep 11 2024
R-4.5-linux-x86_64 | NOTE | Sep 11 2024
R-4.4-win-x86_64 | NOTE | Sep 11 2024
R-4.4-mac-x86_64 | OK | Sep 11 2024
R-4.4-mac-aarch64 | OK | Sep 11 2024
R-4.3-win-x86_64 | NOTE | Sep 11 2024
R-4.3-mac-x86_64 | OK | Sep 11 2024
R-4.3-mac-aarch64 | OK | Sep 11 2024

Exports: %>% as_array as_iterator autograd_backward autograd_function autograd_grad autograd_set_grad_mode
backends_cudnn_is_available backends_cudnn_version backends_mkl_is_available backends_mkldnn_is_available backends_mps_is_available backends_openmp_is_available
buffer_from_torch_tensor call_torch_function clone_module contrib_sort_vertices
cuda_amp_grad_scaler cuda_current_device cuda_device_count cuda_empty_cache cuda_get_device_capability cuda_get_rng_state cuda_is_available cuda_memory_stats cuda_memory_summary cuda_runtime_version cuda_set_rng_state cuda_synchronize
dataloader dataloader_make_iter dataloader_next dataset dataset_subset
distr_bernoulli distr_categorical distr_chi2 distr_gamma distr_mixture_same_family distr_multivariate_normal distr_normal distr_poisson
enumerate get_install_libs_url install_torch install_torch_from_file
is_dataloader is_nn_buffer is_nn_module is_nn_parameter is_optimizer is_torch_device is_torch_dtype is_torch_layout is_torch_memory_format is_torch_qscheme is_undefined_tensor iterable_dataset
jit_compile jit_load jit_ops jit_save jit_save_for_mobile jit_scalar jit_trace jit_trace_module jit_tuple
linalg_cholesky linalg_cholesky_ex linalg_cond linalg_det linalg_eig linalg_eigh linalg_eigvals linalg_eigvalsh linalg_householder_product linalg_inv linalg_inv_ex linalg_lstsq linalg_matrix_norm linalg_matrix_power linalg_matrix_rank linalg_multi_dot linalg_norm linalg_pinv linalg_qr linalg_slogdet linalg_solve linalg_solve_triangular linalg_svd linalg_svdvals linalg_tensorinv linalg_tensorsolve linalg_vector_norm
load_state_dict local_autocast local_device local_enable_grad local_no_grad local_torch_manual_seed loop
lr_cosine_annealing lr_lambda lr_multiplicative lr_one_cycle lr_reduce_on_plateau lr_scheduler lr_step
nn_adaptive_avg_pool1d nn_adaptive_avg_pool2d nn_adaptive_avg_pool3d nn_adaptive_log_softmax_with_loss nn_adaptive_max_pool1d nn_adaptive_max_pool2d nn_adaptive_max_pool3d nn_avg_pool1d nn_avg_pool2d nn_avg_pool3d nn_batch_norm1d nn_batch_norm2d nn_batch_norm3d nn_bce_loss nn_bce_with_logits_loss nn_bilinear nn_buffer nn_celu nn_contrib_sparsemax nn_conv_transpose1d nn_conv_transpose2d nn_conv_transpose3d nn_conv1d nn_conv2d nn_conv3d nn_cosine_embedding_loss nn_cross_entropy_loss nn_ctc_loss nn_dropout nn_dropout2d nn_dropout3d nn_elu nn_embedding nn_embedding_bag nn_flatten nn_fractional_max_pool2d nn_fractional_max_pool3d nn_gelu nn_glu nn_group_norm nn_gru nn_hardshrink nn_hardsigmoid nn_hardswish nn_hardtanh nn_hinge_embedding_loss nn_identity
nn_init_calculate_gain nn_init_constant_ nn_init_dirac_ nn_init_eye_ nn_init_kaiming_normal_ nn_init_kaiming_uniform_ nn_init_normal_ nn_init_ones_ nn_init_orthogonal_ nn_init_sparse_ nn_init_trunc_normal_ nn_init_uniform_ nn_init_xavier_normal_ nn_init_xavier_uniform_ nn_init_zeros_
nn_kl_div_loss nn_l1_loss nn_layer_norm nn_leaky_relu nn_linear nn_log_sigmoid nn_log_softmax nn_lp_pool1d nn_lp_pool2d nn_lstm nn_margin_ranking_loss nn_max_pool1d nn_max_pool2d nn_max_pool3d nn_max_unpool1d nn_max_unpool2d nn_max_unpool3d nn_module nn_module_dict nn_module_list nn_mse_loss nn_multi_margin_loss nn_multihead_attention nn_multilabel_margin_loss nn_multilabel_soft_margin_loss nn_nll_loss nn_pairwise_distance nn_parameter nn_poisson_nll_loss nn_prelu nn_prune_head nn_relu nn_relu6 nn_rnn nn_rrelu nn_selu nn_sequential nn_sigmoid nn_silu nn_smooth_l1_loss nn_soft_margin_loss nn_softmax nn_softmax2d nn_softmin nn_softplus nn_softshrink nn_softsign nn_tanh nn_tanhshrink nn_threshold nn_triplet_margin_loss nn_triplet_margin_with_distance_loss nn_unflatten nn_upsample
nn_utils_clip_grad_norm_ nn_utils_clip_grad_value_ nn_utils_rnn_pack_padded_sequence nn_utils_rnn_pack_sequence nn_utils_rnn_pad_packed_sequence nn_utils_rnn_pad_sequence nn_utils_weight_norm
nnf_adaptive_avg_pool1d nnf_adaptive_avg_pool2d nnf_adaptive_avg_pool3d nnf_adaptive_max_pool1d nnf_adaptive_max_pool2d nnf_adaptive_max_pool3d nnf_affine_grid nnf_alpha_dropout nnf_avg_pool1d nnf_avg_pool2d nnf_avg_pool3d nnf_batch_norm nnf_bilinear nnf_binary_cross_entropy nnf_binary_cross_entropy_with_logits nnf_celu nnf_celu_ nnf_contrib_sparsemax nnf_conv_tbc nnf_conv_transpose1d nnf_conv_transpose2d nnf_conv_transpose3d nnf_conv1d nnf_conv2d nnf_conv3d nnf_cosine_embedding_loss nnf_cosine_similarity nnf_cross_entropy nnf_ctc_loss nnf_dropout nnf_dropout2d nnf_dropout3d nnf_elu nnf_elu_ nnf_embedding nnf_embedding_bag nnf_fold nnf_fractional_max_pool2d nnf_fractional_max_pool3d nnf_gelu nnf_glu nnf_grid_sample nnf_group_norm nnf_gumbel_softmax nnf_hardshrink nnf_hardsigmoid nnf_hardswish nnf_hardtanh nnf_hardtanh_ nnf_hinge_embedding_loss nnf_instance_norm nnf_interpolate nnf_kl_div nnf_l1_loss nnf_layer_norm nnf_leaky_relu nnf_linear nnf_local_response_norm nnf_log_softmax nnf_logsigmoid nnf_lp_pool1d nnf_lp_pool2d nnf_margin_ranking_loss nnf_max_pool1d nnf_max_pool2d nnf_max_pool3d nnf_max_unpool1d nnf_max_unpool2d nnf_max_unpool3d nnf_mse_loss nnf_multi_head_attention_forward nnf_multi_margin_loss nnf_multilabel_margin_loss nnf_multilabel_soft_margin_loss nnf_nll_loss nnf_normalize nnf_one_hot nnf_pad nnf_pairwise_distance nnf_pdist nnf_pixel_shuffle nnf_poisson_nll_loss nnf_prelu nnf_relu nnf_relu_ nnf_relu6 nnf_rrelu nnf_rrelu_ nnf_selu nnf_selu_ nnf_sigmoid nnf_silu nnf_smooth_l1_loss nnf_soft_margin_loss nnf_softmax nnf_softmin nnf_softplus nnf_softshrink nnf_softsign nnf_tanhshrink nnf_threshold nnf_threshold_ nnf_triplet_margin_loss nnf_triplet_margin_with_distance_loss nnf_unfold
optim_adadelta optim_adagrad optim_adam optim_adamw optim_asgd optim_lbfgs optim_rmsprop optim_rprop optim_sgd optimizer sampler set_autocast slc tensor_dataset
torch_abs torch_absolute torch_acos torch_acosh torch_adaptive_avg_pool1d torch_add torch_addbmm torch_addcdiv torch_addcmul torch_addmm torch_addmv torch_addr torch_allclose torch_amax torch_amin torch_angle torch_arange torch_arccos torch_arccosh torch_arcsin torch_arcsinh torch_arctan torch_arctanh torch_argmax torch_argmin torch_argsort torch_as_strided torch_asin torch_asinh torch_atan torch_atan2 torch_atanh torch_atleast_1d torch_atleast_2d torch_atleast_3d torch_avg_pool1d torch_baddbmm torch_bartlett_window torch_bernoulli torch_bincount torch_bitwise_and torch_bitwise_not torch_bitwise_or torch_bitwise_xor torch_blackman_window torch_block_diag torch_bmm torch_bool torch_broadcast_tensors torch_bucketize torch_can_cast torch_cartesian_prod torch_cat torch_cdist torch_cdouble torch_ceil torch_celu torch_celu_ torch_cfloat torch_cfloat128 torch_cfloat32 torch_cfloat64 torch_chain_matmul torch_chalf torch_channel_shuffle torch_channels_last_format torch_cholesky torch_cholesky_inverse torch_cholesky_solve torch_chunk torch_clamp torch_clip torch_clone torch_combinations torch_complex torch_conj torch_contiguous_format torch_conv_tbc torch_conv_transpose1d torch_conv_transpose2d torch_conv_transpose3d torch_conv1d torch_conv2d torch_conv3d torch_cos torch_cosh torch_cosine_similarity torch_count_nonzero torch_cross torch_cummax torch_cummin torch_cumprod torch_cumsum torch_deg2rad torch_dequantize torch_det torch_device torch_diag torch_diag_embed torch_diagflat torch_diagonal torch_diff torch_digamma torch_dist torch_div torch_divide torch_dot torch_double torch_dstack torch_einsum torch_empty torch_empty_like torch_empty_strided torch_eq torch_equal torch_erf torch_erfc torch_erfinv torch_exp torch_exp2 torch_expm1 torch_eye torch_fft_fft torch_fft_fftfreq torch_fft_ifft torch_fft_irfft torch_fft_rfft torch_finfo torch_fix torch_flatten torch_flip torch_fliplr torch_flipud torch_float torch_float16 torch_float32 torch_float64 torch_floor torch_floor_divide torch_fmod torch_frac torch_full torch_full_like torch_gather torch_gcd torch_ge torch_generator torch_geqrf torch_ger torch_get_default_dtype torch_get_num_interop_threads torch_get_num_threads torch_get_rng_state torch_greater torch_greater_equal torch_gt torch_half torch_hamming_window torch_hann_window torch_heaviside torch_histc torch_hstack torch_hypot torch_i0 torch_iinfo torch_imag torch_index torch_index_put torch_index_put_ torch_index_select torch_install_path torch_int torch_int16 torch_int32 torch_int64 torch_int8 torch_inverse torch_is_complex torch_is_floating_point torch_is_installed torch_is_nonzero torch_isclose torch_isfinite torch_isinf torch_isnan torch_isneginf torch_isposinf torch_isreal torch_istft torch_kaiser_window torch_kron torch_kthvalue torch_lcm torch_le torch_lerp torch_less torch_less_equal torch_lgamma torch_linspace torch_load torch_log torch_log10 torch_log1p torch_log2 torch_logaddexp torch_logaddexp2 torch_logcumsumexp torch_logdet torch_logical_and torch_logical_not torch_logical_or torch_logical_xor torch_logit torch_logspace torch_logsumexp torch_long torch_lt torch_lu torch_lu_solve torch_lu_unpack torch_manual_seed torch_masked_select torch_matmul torch_matrix_exp torch_matrix_power torch_max torch_maximum torch_mean torch_median torch_meshgrid torch_min torch_minimum torch_mm torch_mode torch_movedim torch_mul torch_multinomial torch_multiply torch_mv torch_mvlgamma torch_nanquantile torch_nansum torch_narrow torch_ne torch_neg torch_negative torch_nextafter torch_nonzero torch_norm torch_normal torch_not_equal torch_ones torch_ones_like torch_orgqr torch_ormqr torch_outer torch_pdist torch_per_channel_affine torch_per_channel_symmetric torch_per_tensor_affine torch_per_tensor_symmetric torch_pinverse torch_pixel_shuffle torch_poisson torch_polar torch_polygamma torch_pow torch_preserve_format torch_prod torch_promote_types torch_qint32 torch_qint8 torch_qr torch_quantile torch_quantize_per_channel torch_quantize_per_tensor torch_quint8 torch_rad2deg torch_rand torch_rand_like torch_randint torch_randint_like torch_randn torch_randn_like torch_randperm torch_range torch_real torch_reciprocal torch_reduction_mean torch_reduction_none torch_reduction_sum torch_relu torch_relu_ torch_remainder torch_renorm torch_repeat_interleave torch_reshape torch_result_type torch_roll torch_rot90 torch_round torch_rrelu_ torch_rsqrt torch_save torch_scalar_tensor torch_searchsorted torch_selu torch_selu_ torch_serialize torch_set_default_dtype torch_set_num_interop_threads torch_set_num_threads torch_set_rng_state torch_sgn torch_short torch_sigmoid torch_sign torch_signbit torch_sin torch_sinh torch_slogdet torch_sort torch_sparse_coo torch_sparse_coo_tensor torch_split torch_sqrt torch_square torch_squeeze torch_stack torch_std torch_std_mean torch_stft torch_strided torch_sub torch_subtract torch_sum torch_svd torch_t torch_take torch_tan torch_tanh torch_tensor torch_tensor_from_buffer torch_tensordot torch_threshold_ torch_topk torch_trace torch_transpose torch_trapz torch_triangular_solve torch_tril torch_tril_indices torch_triu torch_triu_indices torch_true_divide torch_trunc torch_uint8 torch_unbind torch_unique_consecutive torch_unsafe_chunk torch_unsafe_split torch_unsqueeze torch_vander torch_var torch_var_mean torch_vdot torch_view_as_complex torch_view_as_real torch_vstack torch_where torch_zeros torch_zeros_like
unset_autocast with_autocast with_detect_anomaly with_device with_enable_grad with_no_grad with_torch_manual_seed yield

Dependencies: bit, bit64, callr, cli, coro, desc, glue, jsonlite, magrittr, processx, ps, R6, Rcpp, rlang, safetensors, withr

Creating tensors

Rendered from tensor-creation.Rmd using knitr::rmarkdown on Sep 11 2024.

Last update: 2022-02-04
Started: 2020-06-27
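
A minimal sketch of the creation patterns this vignette covers (values are illustrative):

library(torch)
torch_tensor(c(1, 2, 3))                    # from an R vector; numeric becomes float
torch_tensor(matrix(1:4, nrow = 2))         # from an R matrix, keeping its shape
torch_zeros(2, 2, dtype = torch_float64())  # explicit dtype
torch_randn(3, 3, device = "cpu")           # explicit device; "cuda" when available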

Distributions

Rendered from distributions.Rmd using knitr::rmarkdown on Sep 11 2024.

Last update: 2022-02-04
Started: 2021-04-27
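
A minimal sketch of the distribution API (parameters are illustrative):

library(torch)
d <- distr_normal(loc = 0, scale = 1)   # standard normal
x <- d$sample(3)                        # draw 3 samples
d$log_prob(x)                           # log-density at those samples
distr_bernoulli(probs = 0.3)$sample(5)  # five binary draws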

Extending Autograd

Rendered from extending-autograd.Rmd using knitr::rmarkdown on Sep 11 2024.

Last update: 2022-02-04
Started: 2020-04-29
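
A condensed sketch of the pattern the vignette walks through: a custom exp
whose backward pass is defined by hand via autograd_function():

library(torch)
exp2 <- autograd_function(
  forward = function(ctx, i) {
    result <- i$exp()
    ctx$save_for_backward(result = result)   # stash what backward will need
    result
  },
  backward = function(ctx, grad_output) {
    # return a named list matching forward's tensor arguments
    list(i = grad_output * ctx$saved_variables$result)
  }
)
x <- torch_tensor(1, requires_grad = TRUE)
y <- exp2(x)
y$backward()
x$grad   # equals exp(1)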

Indexing tensors

Rendered from indexing.Rmd using knitr::rmarkdown on Sep 11 2024.

Last update: 2023-08-10
Started: 2020-04-29
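
A minimal sketch; note that tensor indexing is 1-based, following R rather than Python:

library(torch)
x <- torch_tensor(matrix(1:6, nrow = 2))
x[1, ]      # first row (1-based)
x[ , 2]     # second column
x[1:2, 1]   # ranges work as in R
x[-1, ]     # negative indices count from the end here, unlike base R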

Installation

Rendered from installation.Rmd using knitr::rmarkdown on Sep 11 2024.

Last update: 2024-07-02
Started: 2020-10-08
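
In short (a sketch of the checks the vignette describes):

library(torch)
torch_is_installed()   # TRUE once the libtorch backend is in place
torch_install_path()   # where the backend libraries are stored
# install_torch()      # re-run the backend download manually if needed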

Loading data

Rendered from loading-data.Rmd using knitr::rmarkdown on Sep 11 2024.

Last update: 2022-02-15
Started: 2020-07-01
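
A minimal sketch of the dataset/dataloader pattern (the data are random placeholders):

library(torch)
ds <- tensor_dataset(x = torch_randn(100, 10), y = torch_randn(100, 1))
dl <- dataloader(ds, batch_size = 32, shuffle = TRUE)
coro::loop(for (batch in dl) {
  print(batch$x$shape)   # 32 x 10; the last batch may be smaller
})

Custom datasets are defined with dataset(), supplying initialize(), .getitem() and .length() methods.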

Python to R

Rendered from python-to-r.Rmd using knitr::rmarkdown on Sep 11 2024.

Last update: 2022-02-25
Started: 2021-02-11
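
The gist of the mapping, as a sketch:

# PyTorch (Python)          ->  torch (R)
# x = torch.randn(2, 2)         x <- torch_randn(2, 2)
# x.view(4)                     x$view(4)          # methods use $ instead of .
# x[0, 1]                       x[1, 2]            # R is 1-based
# torch.nn.Linear(10, 1)        nn_linear(10, 1)   # torch.nn.*  -> nn_*
# F.relu(x)                     nnf_relu(x)        # F.* (functional) -> nnf_*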

Serialization

Rendered from serialization.Rmd using knitr::rmarkdown on Sep 11 2024.

Last update: 2023-08-08
Started: 2020-09-24
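
A minimal sketch (the file name is illustrative):

library(torch)
x <- torch_randn(3, 3)
torch_save(x, "x.pt")      # also works for modules and optimizers
y <- torch_load("x.pt")
torch_allclose(x, y)       # TRUE
r <- torch_serialize(x)    # raw vector instead of a file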

TorchScript

Rendered from torchscript.Rmd using knitr::rmarkdown on Sep 11 2024.

Last update: 2022-02-04
Started: 2021-07-01
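
A minimal tracing sketch (the function and file name are illustrative):

library(torch)
fn <- function(x) torch_relu(x) * 2
traced <- jit_trace(fn, torch_tensor(c(-1, 0, 1)))  # record ops on an example input
traced(torch_tensor(c(-2, 2)))                      # runs the traced graph
jit_save(traced, "fn.pt")                           # reload later with jit_load()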

Using autograd

Rendered from using-autograd.Rmd using knitr::rmarkdown on Sep 11 2024.

Last update: 2022-02-04
Started: 2020-04-29
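
The core autograd workflow in a few lines (a sketch):

library(torch)
x <- torch_tensor(c(2, 3), requires_grad = TRUE)
y <- (x^2)$sum()   # y = x1^2 + x2^2
y$backward()       # backpropagate
x$grad             # dy/dx = 2 * x, i.e. 4 and 6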

Readme and manuals

Help Manual

Help page | Topics
Converts to array | as_array
Computes the sum of gradients of given tensors w.r.t. graph leaves. | autograd_backward
Records operation history and defines formulas for differentiating ops. | autograd_function
Computes and returns the sum of gradients of outputs w.r.t. the inputs. | autograd_grad
Set grad mode | autograd_set_grad_mode
Class representing the context. | AutogradContext
CuDNN is available | backends_cudnn_is_available
CuDNN version | backends_cudnn_version
MKL is available | backends_mkl_is_available
MKLDNN is available | backends_mkldnn_is_available
MPS is available | backends_mps_is_available
OpenMP is available | backends_openmp_is_available
Given a list of values (possibly containing numbers), returns a list where each value is broadcast according to the usual broadcasting rules | broadcast_all
Clone a torch module. | clone_module
Abstract base class for constraints. | Constraint
Contrib sort vertices | contrib_sort_vertices
Creates a gradient scaler | cuda_amp_grad_scaler
Returns the index of the currently selected device. | cuda_current_device
Returns the number of GPUs available. | cuda_device_count
Empty cache | cuda_empty_cache
Returns the major and minor CUDA capability of 'device' | cuda_get_device_capability
Returns a bool indicating if CUDA is currently available. | cuda_is_available
Returns a dictionary of CUDA memory allocator statistics for a given device. | cuda_memory_stats cuda_memory_summary
Returns the CUDA runtime version | cuda_runtime_version
Waits for all kernels in all streams on a CUDA device to complete. | cuda_synchronize
Data loader. Combines a dataset and a sampler, and provides single- or multi-process iterators over the dataset. | dataloader
Creates an iterator from a DataLoader | dataloader_make_iter
Get the next element of a dataloader iterator | dataloader_next
Helper function to create a function that generates R6 instances of class 'dataset' | dataset
Dataset Subset | dataset_subset
Creates a Bernoulli distribution parameterized by 'probs' or 'logits' (but not both). Samples are binary (0 or 1). They take the value '1' with probability 'p' and '0' with probability '1 - p'. | distr_bernoulli
Creates a categorical distribution parameterized by either 'probs' or 'logits' (but not both). | distr_categorical
Creates a Chi2 distribution parameterized by shape parameter 'df'. This is exactly equivalent to 'distr_gamma(alpha=0.5*df, beta=0.5)' | distr_chi2
Creates a Gamma distribution parameterized by shape 'concentration' and 'rate'. | distr_gamma
Mixture of components in the same family | distr_mixture_same_family
Multivariate normal (Gaussian) distribution | distr_multivariate_normal
Creates a normal (also called Gaussian) distribution parameterized by 'loc' and 'scale'. | distr_normal
Creates a Poisson distribution parameterized by 'rate', the rate parameter. | distr_poisson
Generic R6 class representing distributions | Distribution
Enumerate an iterator | enumerate
Enumerate an iterator | enumerate.dataloader
Install Torch from files | get_install_libs_url install_torch_from_file
Install Torch | install_torch
Checks if the object is a dataloader | is_dataloader
Checks if the object is a nn_buffer | is_nn_buffer
Checks if the object is an nn_module | is_nn_module
Checks if an object is a nn_parameter | is_nn_parameter
Checks if the object is a torch optimizer | is_optimizer
Checks if object is a device | is_torch_device
Check if object is a torch data type | is_torch_dtype
Check if an object is a torch layout. | is_torch_layout
Check if an object is a memory format | is_torch_memory_format
Checks if an object is a QScheme | is_torch_qscheme
Checks if a tensor is undefined | is_undefined_tensor
Creates an iterable dataset | iterable_dataset
Compile TorchScript code into a graph | jit_compile
Loads a 'script_function' or 'script_module' previously saved with 'jit_save' | jit_load
Enable idiomatic access to JIT operators from R. | jit_ops
Saves a 'script_function' to a path | jit_save
Saves a 'script_function' or 'script_module' in bytecode form, to be loaded on a mobile device | jit_save_for_mobile
Adds the 'jit_scalar' class to the input | jit_scalar
Trace a function and return an executable 'script_function'. | jit_trace
Trace a module | jit_trace_module
Adds the 'jit_tuple' class to the input | jit_tuple
Computes the Cholesky decomposition of a complex Hermitian or real symmetric positive-definite matrix. | linalg_cholesky
Computes the Cholesky decomposition of a complex Hermitian or real symmetric positive-definite matrix. | linalg_cholesky_ex
Computes the condition number of a matrix with respect to a matrix norm. | linalg_cond
Computes the determinant of a square matrix. | linalg_det
Computes the eigenvalue decomposition of a square matrix if it exists. | linalg_eig
Computes the eigenvalue decomposition of a complex Hermitian or real symmetric matrix. | linalg_eigh
Computes the eigenvalues of a square matrix. | linalg_eigvals
Computes the eigenvalues of a complex Hermitian or real symmetric matrix. | linalg_eigvalsh
Computes the first 'n' columns of a product of Householder matrices. | linalg_householder_product
Computes the inverse of a square matrix if it exists. | linalg_inv
Computes the inverse of a square matrix if it is invertible. | linalg_inv_ex
Computes a solution to the least squares problem of a system of linear equations. | linalg_lstsq
Computes a matrix norm. | linalg_matrix_norm
Computes the 'n'-th power of a square matrix for an integer 'n'. | linalg_matrix_power
Computes the numerical rank of a matrix. | linalg_matrix_rank
Efficiently multiplies two or more matrices | linalg_multi_dot
Computes a vector or matrix norm. | linalg_norm
Computes the pseudoinverse (Moore-Penrose inverse) of a matrix. | linalg_pinv
Computes the QR decomposition of a matrix. | linalg_qr
Computes the sign and natural logarithm of the absolute value of the determinant of a square matrix. | linalg_slogdet
Computes the solution of a square system of linear equations with a unique solution. | linalg_solve
Triangular solve | linalg_solve_triangular
Computes the singular value decomposition (SVD) of a matrix. | linalg_svd
Computes the singular values of a matrix. | linalg_svdvals
Computes the multiplicative inverse of 'torch_tensordot()' | linalg_tensorinv
Computes the solution 'X' to the system 'torch_tensordot(A, X) = B'. | linalg_tensorsolve
Computes a vector norm. | linalg_vector_norm
Load a state dict file | load_state_dict
Autocast context manager | local_autocast set_autocast unset_autocast with_autocast
Device contexts | local_device with_device
Set the learning rate of each parameter group using a cosine annealing schedule | lr_cosine_annealing
Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch = -1, sets initial lr as lr. | lr_lambda
Multiply the learning rate of each parameter group by the factor given in the specified function. When last_epoch = -1, sets initial lr as lr. | lr_multiplicative
One-cycle learning rate | lr_one_cycle
Reduce learning rate on plateau | lr_reduce_on_plateau
Creates learning rate schedulers | lr_scheduler
Step learning rate decay | lr_step
Applies a 1D adaptive average pooling over an input signal composed of several input planes. | nn_adaptive_avg_pool1d
Applies a 2D adaptive average pooling over an input signal composed of several input planes. | nn_adaptive_avg_pool2d
Applies a 3D adaptive average pooling over an input signal composed of several input planes. | nn_adaptive_avg_pool3d
AdaptiveLogSoftmaxWithLoss module | nn_adaptive_log_softmax_with_loss
Applies a 1D adaptive max pooling over an input signal composed of several input planes. | nn_adaptive_max_pool1d
Applies a 2D adaptive max pooling over an input signal composed of several input planes. | nn_adaptive_max_pool2d
Applies a 3D adaptive max pooling over an input signal composed of several input planes. | nn_adaptive_max_pool3d
Applies a 1D average pooling over an input signal composed of several input planes. | nn_avg_pool1d
Applies a 2D average pooling over an input signal composed of several input planes. | nn_avg_pool2d
Applies a 3D average pooling over an input signal composed of several input planes. | nn_avg_pool3d
BatchNorm1D module | nn_batch_norm1d
BatchNorm2D | nn_batch_norm2d
BatchNorm3D | nn_batch_norm3d
Binary cross entropy loss | nn_bce_loss
BCE with logits loss | nn_bce_with_logits_loss
Bilinear module | nn_bilinear
Creates a nn_buffer | nn_buffer
CELU module | nn_celu
Sparsemax activation | nn_contrib_sparsemax
ConvTranspose1D | nn_conv_transpose1d
ConvTranspose2D module | nn_conv_transpose2d
ConvTranspose3D module | nn_conv_transpose3d
Conv1D module | nn_conv1d
Conv2D module | nn_conv2d
Conv3D module | nn_conv3d
Cosine embedding loss | nn_cosine_embedding_loss
CrossEntropyLoss module | nn_cross_entropy_loss
The Connectionist Temporal Classification loss. | nn_ctc_loss
Dropout module | nn_dropout
Dropout2D module | nn_dropout2d
Dropout3D module | nn_dropout3d
ELU module | nn_elu
Embedding module | nn_embedding
Embedding bag module | nn_embedding_bag
Flattens a contiguous range of dims into a tensor. | nn_flatten
Applies a 2D fractional max pooling over an input signal composed of several input planes. | nn_fractional_max_pool2d
Applies a 3D fractional max pooling over an input signal composed of several input planes. | nn_fractional_max_pool3d
GELU module | nn_gelu
GLU module | nn_glu
Group normalization | nn_group_norm
Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence. | nn_gru
Hardshrink module | nn_hardshrink
Hardsigmoid module | nn_hardsigmoid
Hardswish module | nn_hardswish
Hardtanh module | nn_hardtanh
Hinge embedding loss | nn_hinge_embedding_loss
Identity module | nn_identity
Calculate gain | nn_init_calculate_gain
Constant initialization | nn_init_constant_
Dirac initialization | nn_init_dirac_
Eye initialization | nn_init_eye_
Kaiming normal initialization | nn_init_kaiming_normal_
Kaiming uniform initialization | nn_init_kaiming_uniform_
Normal initialization | nn_init_normal_
Ones initialization | nn_init_ones_
Orthogonal initialization | nn_init_orthogonal_
Sparse initialization | nn_init_sparse_
Truncated normal initialization | nn_init_trunc_normal_
Uniform initialization | nn_init_uniform_
Xavier normal initialization | nn_init_xavier_normal_
Xavier uniform initialization | nn_init_xavier_uniform_
Zeros initialization | nn_init_zeros_
Kullback-Leibler divergence loss | nn_kl_div_loss
L1 loss | nn_l1_loss
Layer normalization | nn_layer_norm
LeakyReLU module | nn_leaky_relu
Linear module | nn_linear
LogSigmoid module | nn_log_sigmoid
LogSoftmax module | nn_log_softmax
Applies a 1D power-average pooling over an input signal composed of several input planes. | nn_lp_pool1d
Applies a 2D power-average pooling over an input signal composed of several input planes. | nn_lp_pool2d
Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence. | nn_lstm
Margin ranking loss | nn_margin_ranking_loss
MaxPool1D module | nn_max_pool1d
MaxPool2D module | nn_max_pool2d
Applies a 3D max pooling over an input signal composed of several input planes. | nn_max_pool3d
Computes a partial inverse of 'MaxPool1d'. | nn_max_unpool1d
Computes a partial inverse of 'MaxPool2d'. | nn_max_unpool2d
Computes a partial inverse of 'MaxPool3d'. | nn_max_unpool3d
Base class for all neural network modules (see the training-loop sketch at the end of this manual). | nn_module
Container that allows named values | nn_module_dict
Holds submodules in a list. | nn_module_list
MSE loss | nn_mse_loss
Multi margin loss | nn_multi_margin_loss
MultiHead attention | nn_multihead_attention
Multilabel margin loss | nn_multilabel_margin_loss
Multi label soft margin loss | nn_multilabel_soft_margin_loss
NLL loss | nn_nll_loss
Pairwise distance | nn_pairwise_distance
Creates an 'nn_parameter' | nn_parameter
Poisson NLL loss | nn_poisson_nll_loss
PReLU module | nn_prelu
Prune top layer(s) of a network | nn_prune_head
ReLU module | nn_relu
ReLU6 module | nn_relu6
RNN module | nn_rnn
RReLU module | nn_rrelu
SELU module | nn_selu
A sequential container | nn_sequential
Sigmoid module | nn_sigmoid
Applies the Sigmoid Linear Unit (SiLU) function, element-wise. The SiLU function is also known as the swish function. | nn_silu
Smooth L1 loss | nn_smooth_l1_loss
Soft margin loss | nn_soft_margin_loss
Softmax module | nn_softmax
Softmax2d module | nn_softmax2d
Softmin | nn_softmin
Softplus module | nn_softplus
Softshrink module | nn_softshrink
Softsign module | nn_softsign
Tanh module | nn_tanh
Tanhshrink module | nn_tanhshrink
Threshold module | nn_threshold
Triplet margin loss | nn_triplet_margin_loss
Triplet margin with distance loss | nn_triplet_margin_with_distance_loss
Unflattens a tensor dim, expanding it to a desired shape. For use with 'nn_sequential'. | nn_unflatten
Upsample module | nn_upsample
Clips gradient norm of an iterable of parameters. | nn_utils_clip_grad_norm_
Clips gradient of an iterable of parameters at specified value. | nn_utils_clip_grad_value_
Packs a Tensor containing padded sequences of variable length. | nn_utils_rnn_pack_padded_sequence
Packs a list of variable length Tensors | nn_utils_rnn_pack_sequence
Pads a packed batch of variable length sequences. | nn_utils_rnn_pad_packed_sequence
Pad a list of variable length Tensors with 'padding_value' | nn_utils_rnn_pad_sequence
Weight normalization | nn_utils_weight_norm
Adaptive_avg_pool1d | nnf_adaptive_avg_pool1d
Adaptive_avg_pool2d | nnf_adaptive_avg_pool2d
Adaptive_avg_pool3d | nnf_adaptive_avg_pool3d
Adaptive_max_pool1d | nnf_adaptive_max_pool1d
Adaptive_max_pool2d | nnf_adaptive_max_pool2d
Adaptive_max_pool3d | nnf_adaptive_max_pool3d
Affine_grid | nnf_affine_grid
Alpha_dropout | nnf_alpha_dropout
Avg_pool1d | nnf_avg_pool1d
Avg_pool2d | nnf_avg_pool2d
Avg_pool3d | nnf_avg_pool3d
Batch_norm | nnf_batch_norm
Bilinear | nnf_bilinear
Binary_cross_entropy | nnf_binary_cross_entropy
Binary_cross_entropy_with_logits | nnf_binary_cross_entropy_with_logits
Celu | nnf_celu nnf_celu_
Sparsemax | nnf_contrib_sparsemax
Conv_tbc | nnf_conv_tbc
Conv_transpose1d | nnf_conv_transpose1d
Conv_transpose2d | nnf_conv_transpose2d
Conv_transpose3d | nnf_conv_transpose3d
Conv1d | nnf_conv1d
Conv2d | nnf_conv2d
Conv3d | nnf_conv3d
Cosine_embedding_loss | nnf_cosine_embedding_loss
Cosine_similarity | nnf_cosine_similarity
Cross_entropy | nnf_cross_entropy
Ctc_loss | nnf_ctc_loss
Dropout | nnf_dropout
Dropout2d | nnf_dropout2d
Dropout3d | nnf_dropout3d
Elu | nnf_elu nnf_elu_
Embedding | nnf_embedding
Embedding_bag | nnf_embedding_bag
Fold | nnf_fold
Fractional_max_pool2d | nnf_fractional_max_pool2d
Fractional_max_pool3d | nnf_fractional_max_pool3d
Gelu | nnf_gelu
Glu | nnf_glu
Grid_sample | nnf_grid_sample
Group_norm | nnf_group_norm
Gumbel_softmax | nnf_gumbel_softmax
Hardshrink | nnf_hardshrink
Hardsigmoid | nnf_hardsigmoid
Hardswish | nnf_hardswish
Hardtanh | nnf_hardtanh nnf_hardtanh_
Hinge_embedding_loss | nnf_hinge_embedding_loss
Instance_norm | nnf_instance_norm
Interpolate | nnf_interpolate
Kl_div | nnf_kl_div
L1_loss | nnf_l1_loss
Layer_norm | nnf_layer_norm
Leaky_relu | nnf_leaky_relu
Linear | nnf_linear
Local_response_norm | nnf_local_response_norm
Log_softmax | nnf_log_softmax
Logsigmoid | nnf_logsigmoid
Lp_pool1d | nnf_lp_pool1d
Lp_pool2d | nnf_lp_pool2d
Margin_ranking_loss | nnf_margin_ranking_loss
Max_pool1d | nnf_max_pool1d
Max_pool2d | nnf_max_pool2d
Max_pool3d | nnf_max_pool3d
Max_unpool1d | nnf_max_unpool1d
Max_unpool2d | nnf_max_unpool2d
Max_unpool3d | nnf_max_unpool3d
Mse_loss | nnf_mse_loss
Multi head attention forward | nnf_multi_head_attention_forward
Multi_margin_loss | nnf_multi_margin_loss
Multilabel_margin_loss | nnf_multilabel_margin_loss
Multilabel_soft_margin_loss | nnf_multilabel_soft_margin_loss
Nll_loss | nnf_nll_loss
Normalize | nnf_normalize
One_hot | nnf_one_hot
Pad | nnf_pad
Pairwise_distance | nnf_pairwise_distance
Pdist | nnf_pdist
Pixel_shuffle | nnf_pixel_shuffle
Poisson_nll_loss | nnf_poisson_nll_loss
Prelu | nnf_prelu
Relu | nnf_relu nnf_relu_
Relu6 | nnf_relu6
Rrelu | nnf_rrelu nnf_rrelu_
Selu | nnf_selu nnf_selu_
Sigmoid | nnf_sigmoid
Applies the Sigmoid Linear Unit (SiLU) function, element-wise. See 'nn_silu()' for more information. | nnf_silu
Smooth_l1_loss | nnf_smooth_l1_loss
Soft_margin_loss | nnf_soft_margin_loss
Softmax | nnf_softmax
Softmin | nnf_softmin
Softplus | nnf_softplus
Softshrink | nnf_softshrink
Softsign | nnf_softsign
Tanhshrink | nnf_tanhshrink
Threshold | nnf_threshold nnf_threshold_
Triplet_margin_loss | nnf_triplet_margin_loss
Triplet margin with distance loss | nnf_triplet_margin_with_distance_loss
Unfold | nnf_unfold
Adadelta optimizer | optim_adadelta
Adagrad optimizer | optim_adagrad
Implements Adam algorithm. | optim_adam
Implements AdamW algorithm | optim_adamw
Averaged Stochastic Gradient Descent optimizer | optim_asgd
LBFGS optimizer | optim_lbfgs
Dummy value indicating a required value. | optim_required
RMSprop optimizer | optim_rmsprop
Implements the resilient backpropagation algorithm. | optim_rprop
SGD optimizer | optim_sgd
Creates a custom optimizer (see the training-loop sketch at the end of this manual) | optimizer
Creates a new Sampler | sampler
Dataset wrapping tensors. | tensor_dataset
Number of threads | threads torch_get_num_interop_threads torch_get_num_threads torch_set_num_interop_threads torch_set_num_threads
Abs | torch_abs
Absolute | torch_absolute
Acos | torch_acos
Acosh | torch_acosh
Adaptive_avg_pool1d | torch_adaptive_avg_pool1d
Add | torch_add
Addbmm | torch_addbmm
Addcdiv | torch_addcdiv
Addcmul | torch_addcmul
Addmm | torch_addmm
Addmv | torch_addmv
Addr | torch_addr
Allclose | torch_allclose
Amax | torch_amax
Amin | torch_amin
Angle | torch_angle
Arange | torch_arange
Arccos | torch_arccos
Arccosh | torch_arccosh
Arcsin | torch_arcsin
Arcsinh | torch_arcsinh
Arctan | torch_arctan
Arctanh | torch_arctanh
Argmax | torch_argmax
Argmin | torch_argmin
Argsort | torch_argsort
As_strided | torch_as_strided
Asin | torch_asin
Asinh | torch_asinh
Atan | torch_atan
Atan2 | torch_atan2
Atanh | torch_atanh
Atleast_1d | torch_atleast_1d
Atleast_2d | torch_atleast_2d
Atleast_3d | torch_atleast_3d
Avg_pool1d | torch_avg_pool1d
Baddbmm | torch_baddbmm
Bartlett_window | torch_bartlett_window
Bernoulli | torch_bernoulli
Bincount | torch_bincount
Bitwise_and | torch_bitwise_and
Bitwise_not | torch_bitwise_not
Bitwise_or | torch_bitwise_or
Bitwise_xor | torch_bitwise_xor
Blackman_window | torch_blackman_window
Block_diag | torch_block_diag
Bmm | torch_bmm
Broadcast_tensors | torch_broadcast_tensors
Bucketize | torch_bucketize
Can_cast | torch_can_cast
Cartesian_prod | torch_cartesian_prod
Cat | torch_cat
Cdist | torch_cdist
Ceil | torch_ceil
Celu | torch_celu
Celu_ | torch_celu_
Chain_matmul | torch_chain_matmul
Channel_shuffle | torch_channel_shuffle
Cholesky | torch_cholesky
Cholesky_inverse | torch_cholesky_inverse
Cholesky_solve | torch_cholesky_solve
Chunk | torch_chunk
Clamp | torch_clamp
Clip | torch_clip
Clone | torch_clone
Combinations | torch_combinations
Complex | torch_complex
Conj | torch_conj
Conv_tbc | torch_conv_tbc
Conv_transpose1d | torch_conv_transpose1d
Conv_transpose2d | torch_conv_transpose2d
Conv_transpose3d | torch_conv_transpose3d
Conv1d | torch_conv1d
Conv2d | torch_conv2d
Conv3d | torch_conv3d
Cos | torch_cos
Cosh | torch_cosh
Cosine_similarity | torch_cosine_similarity
Count_nonzero | torch_count_nonzero
Cross | torch_cross
Cummax | torch_cummax
Cummin | torch_cummin
Cumprod | torch_cumprod
Cumsum | torch_cumsum
Deg2rad | torch_deg2rad
Dequantize | torch_dequantize
Det | torch_det
Create a Device object | torch_device
Diag | torch_diag
Diag_embed | torch_diag_embed
Diagflat | torch_diagflat
Diagonal | torch_diagonal
Computes the n-th forward difference along the given dimension. | torch_diff
Digamma | torch_digamma
Dist | torch_dist
Div | torch_div
Divide | torch_divide
Dot | torch_dot
Dstack | torch_dstack
Torch data types | torch_bool torch_cdouble torch_cfloat torch_cfloat128 torch_cfloat32 torch_cfloat64 torch_chalf torch_double torch_dtype torch_float torch_float16 torch_float32 torch_float64 torch_half torch_int torch_int16 torch_int32 torch_int64 torch_int8 torch_long torch_qint32 torch_qint8 torch_quint8 torch_short torch_uint8
Eig | torch_eig
Einsum | torch_einsum
Empty | torch_empty
Empty_like | torch_empty_like
Empty_strided | torch_empty_strided
Eq | torch_eq
Equal | torch_equal
Erf | torch_erf
Erfc | torch_erfc
Erfinv | torch_erfinv
Exp | torch_exp
Exp2 | torch_exp2
Expm1 | torch_expm1
Eye | torch_eye
Fft | torch_fft_fft
Fftfreq | torch_fft_fftfreq
Ifft | torch_fft_ifft
Irfft | torch_fft_irfft
Rfft | torch_fft_rfft
Floating point type info | torch_finfo
Fix | torch_fix
Flatten | torch_flatten
Flip | torch_flip
Fliplr | torch_fliplr
Flipud | torch_flipud
Floor | torch_floor
Floor_divide | torch_floor_divide
Fmod | torch_fmod
Frac | torch_frac
Full | torch_full
Full_like | torch_full_like
Gather | torch_gather
Gcd | torch_gcd
Ge | torch_ge
Create a Generator object | torch_generator
Geqrf | torch_geqrf
Ger | torch_ger
RNG state management | cuda_get_rng_state cuda_set_rng_state torch_get_rng_state torch_set_rng_state
Greater | torch_greater
Greater_equal | torch_greater_equal
Gt | torch_gt
Hamming_window | torch_hamming_window
Hann_window | torch_hann_window
Heaviside | torch_heaviside
Histc | torch_histc
Hstack | torch_hstack
Hypot | torch_hypot
I0 | torch_i0
Integer type info | torch_iinfo
Imag | torch_imag
Index torch tensors | torch_index
Modify values selected by 'indices'. | torch_index_put
In-place version of 'torch_index_put'. | torch_index_put_
Index_select | torch_index_select
Returns the torch installation path. | torch_install_path
Inverse | torch_inverse
Is_complex | torch_is_complex
Is_floating_point | torch_is_floating_point
Verifies if torch is installed | torch_is_installed
Is_nonzero | torch_is_nonzero
Isclose | torch_isclose
Isfinite | torch_isfinite
Isinf | torch_isinf
Isnan | torch_isnan
Isneginf | torch_isneginf
Isposinf | torch_isposinf
Isreal | torch_isreal
Istft | torch_istft
Kaiser_window | torch_kaiser_window
Kronecker product | torch_kron
Kthvalue | torch_kthvalue
Creates the corresponding layout | torch_layout torch_sparse_coo torch_strided
Lcm | torch_lcm
Le | torch_le
Lerp | torch_lerp
Less | torch_less
Less_equal | torch_less_equal
Lgamma | torch_lgamma
Linspace | torch_linspace
Loads a saved object | torch_load
Log | torch_log
Log10 | torch_log10
Log1p | torch_log1p
Log2 | torch_log2
Logaddexp | torch_logaddexp
Logaddexp2 | torch_logaddexp2
Logcumsumexp | torch_logcumsumexp
Logdet | torch_logdet
Logical_and | torch_logical_and
Logical_not | torch_logical_not
Logical_or | torch_logical_or
Logical_xor | torch_logical_xor
Logit | torch_logit
Logspace | torch_logspace
Logsumexp | torch_logsumexp
Lstsq | torch_lstsq
Lt | torch_lt
LU | torch_lu
Lu_solve | torch_lu_solve
Lu_unpack | torch_lu_unpack
Sets the seed for generating random numbers. | local_torch_manual_seed torch_manual_seed with_torch_manual_seed
Masked_select | torch_masked_select
Matmul | torch_matmul
Matrix_exp | torch_matrix_exp
Matrix_power | torch_matrix_power
Matrix_rank | torch_matrix_rank
Max | torch_max
Maximum | torch_maximum
Mean | torch_mean
Median | torch_median
Memory format | torch_channels_last_format torch_contiguous_format torch_memory_format torch_preserve_format
Meshgrid | torch_meshgrid
Min | torch_min
Minimum | torch_minimum
Mm | torch_mm
Mode | torch_mode
Movedim | torch_movedim
Mul | torch_mul
Multinomial | torch_multinomial
Multiply | torch_multiply
Mv | torch_mv
Mvlgamma | torch_mvlgamma
Nanquantile | torch_nanquantile
Nansum | torch_nansum
Narrow | torch_narrow
Ne | torch_ne
Neg | torch_neg
Negative | torch_negative
Nextafter | torch_nextafter
Nonzero | torch_nonzero
Norm | torch_norm
Normal | torch_normal
Not_equal | torch_not_equal
Ones | torch_ones
Ones_like | torch_ones_like
Orgqr | torch_orgqr
Ormqr | torch_ormqr
Outer | torch_outer
Pdist | torch_pdist
Pinverse | torch_pinverse
Pixel_shuffle | torch_pixel_shuffle
Poisson | torch_poisson
Polar | torch_polar
Polygamma | torch_polygamma
Pow | torch_pow
Prod | torch_prod
Promote_types | torch_promote_types
Qr | torch_qr
Creates the corresponding QScheme object | torch_per_channel_affine torch_per_channel_symmetric torch_per_tensor_affine torch_per_tensor_symmetric torch_qscheme
Quantile | torch_quantile
Quantize_per_channel | torch_quantize_per_channel
Quantize_per_tensor | torch_quantize_per_tensor
Rad2deg | torch_rad2deg
Rand | torch_rand
Rand_like | torch_rand_like
Randint | torch_randint
Randint_like | torch_randint_like
Randn | torch_randn
Randn_like | torch_randn_like
Randperm | torch_randperm
Range | torch_range
Real | torch_real
Reciprocal | torch_reciprocal
Creates the reduction object | torch_reduction torch_reduction_mean torch_reduction_none torch_reduction_sum
Relu | torch_relu
Relu_ | torch_relu_
Remainder | torch_remainder
Renorm | torch_renorm
Repeat_interleave | torch_repeat_interleave
Reshape | torch_reshape
Result_type | torch_result_type
Roll | torch_roll
Rot90 | torch_rot90
Round | torch_round
Rrelu_ | torch_rrelu_
Rsqrt | torch_rsqrt
Saves an object to a disk file. | torch_save
Scalar tensor | torch_scalar_tensor
Searchsorted | torch_searchsorted
Selu | torch_selu
Selu_ | torch_selu_
Serialize a torch object returning a raw object | torch_serialize
Gets and sets the default floating point dtype. | torch_get_default_dtype torch_set_default_dtype
Sgn | torch_sgn
Sigmoid | torch_sigmoid
Sign | torch_sign
Signbit | torch_signbit
Sin | torch_sin
Sinh | torch_sinh
Slogdet | torch_slogdet
Sort | torch_sort
Sparse_coo_tensor | torch_sparse_coo_tensor
Split | torch_split
Sqrt | torch_sqrt
Square | torch_square
Squeeze | torch_squeeze
Stack | torch_stack
Std | torch_std
Std_mean | torch_std_mean
Stft | torch_stft
Sub | torch_sub
Subtract | torch_subtract
Sum | torch_sum
Svd | torch_svd
T | torch_t
Take | torch_take
Tan | torch_tan
Tanh | torch_tanh
Converts R objects to a torch tensor | torch_tensor
Creates a tensor from a buffer of memory | buffer_from_torch_tensor torch_tensor_from_buffer
Tensordot | torch_tensordot
Threshold_ | torch_threshold_
Topk | torch_topk
Trace | torch_trace
Transpose | torch_transpose
Trapz | torch_trapz
Triangular_solve | torch_triangular_solve
Tril | torch_tril
Tril_indices | torch_tril_indices
Triu | torch_triu
Triu_indices | torch_triu_indices
True_divide | torch_true_divide
Trunc | torch_trunc
Unbind | torch_unbind
Unique_consecutive | torch_unique_consecutive
Unsafe_chunk | torch_unsafe_chunk
Unsafe_split | torch_unsafe_split
Unsqueeze | torch_unsqueeze
Vander | torch_vander
Var | torch_var
Var_mean | torch_var_mean
Vdot | torch_vdot
View_as_complex | torch_view_as_complex
View_as_real | torch_view_as_real
Vstack | torch_vstack
Where | torch_where
Zeros | torch_zeros
Zeros_like | torch_zeros_like
Context-manager that enables anomaly detection for the autograd engine. | with_detect_anomaly
Enable grad | local_enable_grad with_enable_grad
Temporarily modify gradient recording. | local_no_grad with_no_grad
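
As referenced from the nn_module and optimizer entries above, here is a minimal
sketch of how the pieces combine into a training loop (network shape, data, and
hyperparameters are illustrative):

library(torch)
net <- nn_module(
  initialize = function() {
    self$fc1 <- nn_linear(10, 16)
    self$fc2 <- nn_linear(16, 1)
  },
  forward = function(x) {
    self$fc2(nnf_relu(self$fc1(x)))
  }
)
model <- net()
opt <- optim_adam(model$parameters, lr = 0.01)
x <- torch_randn(64, 10)   # placeholder inputs
y <- torch_randn(64, 1)    # placeholder targets
for (epoch in 1:10) {
  opt$zero_grad()                       # reset gradients
  loss <- nnf_mse_loss(model(x), y)     # forward pass + loss
  loss$backward()                       # backpropagation
  opt$step()                            # parameter update
}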