**Reconstruction & latent space composition**: [Colab notebook](https://colab.research.google.com/drive/15T89qRkZuOFfcHYEa24mlZUuFeni1QqI#scrollTo=izxG2oGAriLK&uniqifier=1)

**Inference using partial pointcloud**: [Colab notebook](https://colab.research.google.com/drive/1ZY5LVsKR8qN99C0EeyyqVnsWWg4v6vPN#scrollTo=f53ea8fc)
## Step 0: Set up the environment

```
conda create -n virdo python=3.8
conda activate virdo
```
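The training script in Step 2 selects a GPU with `--gpu_id`, which implies a CUDA-capable deep-learning stack inside the `virdo` environment; the exact framework and version are not stated in this README, so the snippet below is only a generic sanity check and assumes PyTorch is installed:

```python
# Generic sanity check for the freshly created `virdo` environment.
# Assumes PyTorch has been installed; VIRDO's own code is not imported here.
import sys
import torch

print(f"Python : {sys.version.split()[0]}")   # expect 3.8.x per Step 0
print(f"PyTorch: {torch.__version__}")
print(f"CUDA   : {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU 0  : {torch.cuda.get_device_name(0)}")  # matches --gpu_id 0
```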
## Step 1: Download pretrained model and dataset

Make sure that `wget` (`$ apt-get install wget`) and `unzip` (`$ apt-get install unzip`) are installed.
```
source download.sh
download_dataset
download_pretrained
```

### (Optional) Manual download
Alternatively, you can manually download the datasets and pretrained models from [here](https://www.dropbox.com/sh/4gnme6f0srhnk23/AAABlA6n8cfyo-GsaiDEqLoba?dl=0). Then put the files as below:
```
── VIRDO
    ...
```
## Step 2: Pretrain nominal shapes

```
python pretrain.py --name <log name> --gpu_id 0
```
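To pretrain several nominal shapes back to back, a small wrapper around the command above is enough; only the `python pretrain.py --name <log name> --gpu_id 0` invocation comes from this README, while the example log names and the sequential scheduling below are assumptions:

```python
# Run the documented pretraining command for several log names in sequence.
# The command line mirrors the README; the log names themselves are hypothetical.
import subprocess

log_names = ["nominal_run_a", "nominal_run_b"]  # hypothetical log names
for name in log_names:
    cmd = ["python", "pretrain.py", "--name", name, "--gpu_id", "0"]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop if a run fails
```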
If you want to check the result of your pretrained model,
```
...
```
then you will see the nominal reconstructions in the `/output/` directory.
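The file format of those reconstructions is not specified in this part of the README, so the sketch below only lists what the check step produced; the `output/` directory name is taken from the sentence above, and everything else is generic:

```python
# List whatever the check step wrote to the output directory.
# Run from the repository root; only the directory name comes from the README,
# and no particular file format is assumed.
from pathlib import Path

out_dir = Path("output")
if not out_dir.is_dir():
    raise SystemExit(f"{out_dir}/ not found - run the check step first")

for f in sorted(out_dir.iterdir()):
    print(f"{f.name:40s} {f.stat().st_size / 1024:8.1f} KB")
```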