Tensor
import torch
- similar to NumPy's ndarray
- can also be used on a GPU to accelerate computing
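A minimal sketch of building a tensor and inspecting it (t is just an illustrative name, the values are arbitrary):
t = torch.tensor([[1., 2., 3.], [4., 5., 6.]])  # construct a tensor from nested Python lists
t.dtype   # torch.float32
t.shape   # torch.Size([2, 3])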
Operations
similar to operations on NumPy's ndarray
x = torch.zeros(5, 3, dtype=torch.float)    # 5x3 tensor filled with zeros
y = torch.randn_like(x, dtype=torch.float)  # random tensor with the same size as x
x + y                                       # element-wise addition
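The same addition can be written in a few equivalent ways (a small sketch; result is just an illustrative name):
torch.add(x, y)              # functional form, same as x + y
result = torch.empty(5, 3)
torch.add(x, y, out=result)  # write the result into a pre-allocated tensor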
size: size()
x.size()   # returns torch.Size([5, 3])
in-place operation: post-fixed with an _
y.add_(x)
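The trailing underscore is the only difference from the out-of-place form; a minimal sketch of the contrast (out is just an illustrative name):
out = y.add(x)   # out-of-place: returns a new tensor, y is unchanged
y.add_(x)        # in-place: y itself is modified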
NumPy-like indexing
x[:, 1]
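A few more NumPy-style slices on the 5x3 tensor x from above (a small sketch):
x[0, :]      # first row, shape torch.Size([3])
x[:, 1]      # second column, shape torch.Size([5])
x[1:3, :2]   # rows 1-2, first two columns, shape torch.Size([2, 2])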
resize/reshape: view()
a = torch.ones(4, 4)
b = a.view(-1, 8)   # the size -1 is inferred from the other dimensions; b has size (2, 8)
For a one-element tensor, use .item() to get the value as a Python number:
x = torch.rand(1)
x.item()
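.item() only works on tensors holding exactly one element; a common pattern (a minimal sketch, loss is just an illustrative name) is pulling a scalar out of a reduction:
loss = torch.rand(3, 3).sum()   # sum() reduces to a one-element tensor
loss.item()                     # plain Python float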
PyTorch $\leftrightarrow$ NumPy
Torch Tensor $\rightarrow$ NumPy Array
Use numpy()
a = torch.ones(5) # torch tensor
b = a.numpy() # convert to numpy array
If both the PyTorch tensor and the NumPy array are on the CPU, they share the same memory, so changing one of them changes the other.
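A quick sketch showing the shared memory, reusing a and b from above:
a.add_(1)   # in-place change to the torch tensor
print(b)    # the numpy array reflects it: [2. 2. 2. 2. 2.]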
Torch Tensor $\leftarrow$ NumPy Array
Use from_numpy()
import numpy as np
a = np.ones(5) # numpy array
b = torch.from_numpy(a) # convert to torch tensor
Changing the NumPy array will change the torch tensor automatically.
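For example (a small sketch), an in-place NumPy update shows up in the torch tensor:
np.add(a, 1, out=a)   # modify the numpy array in place
print(b)              # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)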
Note: All the Tensors on the CPU except a CharTensor support converting to NumPy and back.
CUDA Tensors
- Tensors can be moved onto any device using the .to method.
- Use torch.device objects to move tensors in and out of the GPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
    y = torch.ones_like(x, device=device)  # create a tensor directly on the GPU
    x = x.to(device)                        # move a tensor to the GPU using .to()
    z = x + y
    z.to("cpu", torch.double)               # .to() can also change the dtype while moving
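Note that .numpy() only works on CPU tensors, so a GPU result has to be moved back before converting. A small sketch of a device-agnostic pattern (the names are illustrative):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(5, 3).to(device)   # runs on the GPU when available, otherwise the CPU
x.cpu().numpy()                    # move back to the CPU before converting to NumPy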