
Commit bfc7a55 — "train" (initial commit, 0 parents)


45 files changed: +2868 −0 lines

.gitignore

Lines changed: 6 additions & 0 deletions
*.ply
*.pickle
*.zip
*.pth
.idea
__pycache__/

README.md

Lines changed: 46 additions & 0 deletions
# VIRDO

## Quick Start

**Reconstruction & latent space composition** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/15T89qRkZuOFfcHYEa24mlZUuFeni1QqI#scrollTo=izxG2oGAriLK&uniqifier=1)

**Inference using a partial point cloud** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ZY5LVsKR8qN99C0EeyyqVnsWWg4v6vPN#scrollTo=f53ea8fc)

## Step 0: Set up the environment

```
conda create -n virdo python=3.8
conda activate virdo
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=11.0 -c pytorch
conda install pytorch3d=0.5.0 -c pytorch3d
pip install open3d==0.14.1
pip install plyfile==0.7.4
pip install scikit-image
```

## Step 1: Download the pretrained model and dataset

Make sure `wget` and `unzip` are installed (`apt-get install wget`, `apt-get install unzip`).

```
source download.sh
download_dataset
download_pretrained
```

Alternatively, you can manually download the dataset and pretrained models from [here](https://www.dropbox.com/sh/4gnme6f0srhnk23/AAABlA6n8cfyo-GsaiDEqLoba?dl=0), then place the files as below:

```
VIRDO
├── data
│   └── virdo_simul_dataset.pickle
└── pretrained_model
    ├── force_final.pth
    ├── object_final.pth
    └── deform_final.pth
```

## Step 2: Pretrain nominal shapes

```
python pretrain.py --name <log name> --gpu_id 0
```

To check the result of your pretrained model, run

```
python pretrain.py --checkpoints_dir <dir> --gpu_id 0
```

and you will find the nominal reconstructions in the `/output/` directory.

__init__.py

Lines changed: 1 addition & 0 deletions
from utilities import visualization

colab/inference.ipynb

Lines changed: 706 additions & 0 deletions
Large diffs are not rendered by default.

colab/main_example.ipynb

Lines changed: 1 addition & 0 deletions
Large diffs are not rendered by default.

data/dataset_readme.txt

Lines changed: 57 additions & 0 deletions
###########################################
########      VIRDO DATASET        ########
########  CREATOR: YOUNGSUN WI     ########
########  CONTACT: yswi@umich.edu  ########
###########################################

1. DESCRIPTION: This dataset is stored with 'dtype=torch.float64'. It consists of 144 deformation scenes in total, from 6 different objects, generated through MATLAB. It is divided into 'train' and 'test' splits, where data['train'][OBJECT IDX = i][DEFORM IDX = j] and data['test'][OBJECT IDX = i][DEFORM IDX = j] refer to the same scene but contain two different subsets of query points.

2. STRUCTURE: The dataset structure is as follows:
VIRDO_simul_dataset = {
    'train': {
        <OBJECT IDX>: {
            'nominal': {
                'coords': tensor([1, M, 3]),
                'normals': tensor([1, M, 3]),
                'gt': tensor([1, M, 3]),
                'scale': float
            },
            <DEFORM IDX>: {
                'coords': tensor([1, M, 3]),
                'contact': tensor([1, M_c, 3]),
                'normals': tensor([1, M, 3]),
                'gt': tensor([1, M, 3]),
                'scale': float,
                'reaction': tensor([1, 3])
            },
        },
    },
    'test': {
        <OBJECT IDX>: {
            'nominal': {
                'coords': tensor([1, M, 3]),
                'normals': tensor([1, M, 3]),
                'gt': tensor([1, M, 3]),
                'scale': float
            },
            <DEFORM IDX>: {
                'coords': tensor([1, M, 3]),
                'contact': tensor([1, M_c, 3]),
                'normals': tensor([1, M, 3]),
                'gt': tensor([1, M, 3]),
                'scale': float,
                'reaction': tensor([1, 3])
            },
        },
    },
}

* <OBJECT IDX> = integer from 0 to 5; each number indicates a different object.
* <DEFORM IDX> = a unique integer for each deformation.
* M = total number of query points (on-surface + off-surface)
* M_c = a subset of the on-surface points that are in contact
* The [:, i, :] elements of 'coords', 'normals', and 'gt' refer to the i-th query point of a scene. To get the on-surface points of data_def = data['train'][<OBJECT IDX>][<DEFORM IDX>], use data_def['coords'][:, torch.where(data_def['gt'] == 0)[1], :].
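Following the structure above, here is a minimal sketch of loading the dataset and extracting the on-surface points of one deformed scene. It assumes the file loads with Python's `pickle` module (use `torch.load` instead if it was written with `torch.save`); the path and object index are illustrative:

```python
import pickle
import torch

# load the simulated dataset (path follows the layout in the README)
with open("data/virdo_simul_dataset.pickle", "rb") as f:
    data = pickle.load(f)

# pick object 0 and its first deformation entry (every key except 'nominal')
obj = data["train"][0]
deform_idx = next(k for k in obj if k != "nominal")
data_def = obj[deform_idx]

# on-surface points are the query points whose 'gt' value is 0 (see the note above)
on_surface = data_def["coords"][:, torch.where(data_def["gt"] == 0)[1], :]
print(on_surface.shape)  # (1, num_on_surface_points, 3)
```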

data/spatula_specs.json

Lines changed: 9 additions & 0 deletions
{
    "train": {
        "spatula1_train": [-0.004, 0.0, -0.007],
        "spatula2_train": [-0.01, -0.002, 0.0],
        "spatula3_train": [-0.007, -0.00, 0.0],
        "spatula4_train": [-0.01, 0.0, 0.0]
    },
    "generalization": {}
}
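The meaning of the three-element vectors is not documented in this commit, so here is only a minimal sketch of reading the file in Python, without interpreting the values:

```python
import json

# read the per-spatula spec vectors; their interpretation is not documented here
with open("data/spatula_specs.json") as f:
    specs = json.load(f)

print(specs["train"]["spatula1_train"])  # [-0.004, 0.0, -0.007]
```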

diff_operators.py

Lines changed: 71 additions & 0 deletions
import torch
from torch.autograd import grad


def hessian(y, x):
    """Hessian of y wrt x.
    y: shape (meta_batch_size, num_observations, channels)
    x: shape (meta_batch_size, num_observations, 2)
    """
    meta_batch_size, num_observations = y.shape[:2]
    grad_y = torch.ones_like(y[..., 0]).to(y.device)
    h = torch.zeros(
        meta_batch_size, num_observations, y.shape[-1], x.shape[-1], x.shape[-1]
    ).to(y.device)
    for i in range(y.shape[-1]):
        # calculate dydx over batches for each feature value of y
        dydx = grad(y[..., i], x, grad_y, create_graph=True)[0]

        # calculate the second derivatives of y for each x dimension
        for j in range(x.shape[-1]):
            h[..., i, j, :] = grad(dydx[..., j], x, grad_y, create_graph=True)[0]

    # status = -1 flags NaNs in the Hessian
    status = 0
    if torch.any(torch.isnan(h)):
        status = -1
    return h, status


def laplace(y, x):
    # Laplacian = divergence of the gradient
    grad_y = gradient(y, x)
    return divergence(grad_y, x)


def divergence(y, x):
    # sum of d y_i / d x_i over the last dimension
    div = 0.0
    for i in range(y.shape[-1]):
        div += grad(y[..., i], x, torch.ones_like(y[..., i]), create_graph=True)[0][
            ..., i : i + 1
        ]
    return div


def gradient(y, x, grad_outputs=None):
    y = y.squeeze(0)
    if grad_outputs is None:
        grad_outputs = torch.ones_like(y)
    dydx = torch.autograd.grad(
        y, x, grad_outputs=grad_outputs, create_graph=True, allow_unused=True
    )
    return dydx[0]


def jacobian(y, x):
    """Jacobian of y wrt x."""
    meta_batch_size, num_observations = y.shape[:2]
    jac = torch.zeros(meta_batch_size, num_observations, y.shape[-1], x.shape[-1]).to(
        y.device
    )
    for i in range(y.shape[-1]):
        # calculate dydx over batches for each feature value of y
        y_flat = y[..., i].view(-1, 1)
        jac[:, :, i, :] = grad(y_flat, x, torch.ones_like(y_flat), create_graph=True)[0]

    # status = -1 flags NaNs in the Jacobian
    status = 0
    if torch.any(torch.isnan(jac)):
        status = -1

    return jac, status

download.sh

Lines changed: 14 additions & 0 deletions
download_dataset(){
  (
    cd 'data'
    wget https://www.dropbox.com/s/xtyotb99gqb72xp/virdo_simul_dataset.pickle
  )
}

download_pretrained(){
  (
    cd 'pretrained_model'
    wget https://www.dropbox.com/s/7h2sqc6ouzlk94y/pretrained_model.zip
    unzip -o pretrained_model.zip
  )
}

environment.txt

Lines changed: 9 additions & 0 deletions
torch==1.7.0
torchvision==0.8.1
matplotlib==3.4.3
pytorch3d
open3d
plyfile
scikit-image
tqdm==4.62.1
scipy==1.7.1
