Commit f572acd

update
1 parent 2d1ccec commit f572acd

7 files changed: 0 additions, 206 deletions


.gitignore

Lines changed: 0 additions & 3 deletions
@@ -1,11 +1,8 @@
 *pyc
 .DS_Store
-<<<<<<< HEAD
 doctrees/
 .buildinfo
 .remote-sync.json
 *tensorboard*
 .coverage.*
 __pycache__/
-=======
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

.travis.yml

Lines changed: 0 additions & 12 deletions
@@ -9,29 +9,17 @@ install:
 - if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then
     pip install --only-binary=numpy,scipy numpy nose scipy pytest sklearn;
     pip install tensorflow;
-<<<<<<< HEAD
     pip install git+https://github.com/hycis/TensorGraph.git@master;
-=======
-    pip install git+https://github.com/hycis/TensorGraphX.git@master;
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
   fi

 - if [[ "$TRAVIS_PYTHON_VERSION" == "3.5" ]]; then
     pip3 install --only-binary=numpy,scipy numpy nose scipy pytest sklearn;
     pip3 install tensorflow;
-<<<<<<< HEAD
     pip3 install git+https://github.com/hycis/TensorGraph.git@master;
   fi

 script:
 - echo "TensorGraph Testing.."
-=======
-    pip3 install git+https://github.com/hycis/TensorGraphX.git@master;
-  fi
-
-script:
-- echo "TensorGraphX Testing.."
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 - if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then
     python -m pytest test;
   fi

LICENCE

Lines changed: 0 additions & 8 deletions
@@ -1,8 +1,4 @@
-<<<<<<< HEAD
 Copyright 2015 The TensorGraph Authors. All rights reserved.
-=======
-Copyright 2015 The TensorGraphX Authors. All rights reserved.
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

                                  Apache License
                            Version 2.0, January 2004
@@ -192,11 +188,7 @@ Copyright 2015 The TensorGraphX Authors. All rights reserved.
       same "printed page" as the copyright notice for easier
       identification within third-party archives.

-<<<<<<< HEAD
    Copyright 2015, The TensorGraph Authors.
-=======
-   Copyright 2015, The TensorGraphX Authors.
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.

MANIFEST.in

Lines changed: 0 additions & 4 deletions
@@ -1,6 +1,2 @@
 include README.md LICENCE
-<<<<<<< HEAD
 recursive-include tensorgraph *.py
-=======
-recursive-include tensorgraphx *.py
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

README.md

Lines changed: 0 additions & 153 deletions
@@ -1,4 +1,3 @@
-<<<<<<< HEAD
 `master` [![Build Status](http://54.222.242.222:1010/buildStatus/icon?job=TensorGraph/master)](http://54.222.242.222:1010/job/TensorGraph/master)
 `develop` [![Build Status](http://54.222.242.222:1010/buildStatus/icon?job=TensorGraph/develop)](http://54.222.242.222:1010/job/TensorGraph/develop)

@@ -9,34 +8,18 @@ TensorGraph is a simple, lean, and clean framework on TensorFlow for building an
 As deep learning becomes more and more common and the architectures becoming more
 and more complicated, it seems that we need some easy to use framework to quickly
 build these models and that's what TensorGraph is designed for. It's a very simple
-=======
-[![Build Status](https://travis-ci.org/hycis/TensorGraphX.svg?branch=master)](https://travis-ci.org/hycis/TensorGraphX)
-
-# TensorGraphX - Simplicity is Beauty
-TensorGraphX is a simple, lean, and clean framework on TensorFlow for building any imaginable models.
-
-As deep learning becomes more and more common and the architectures becoming more
-and more complicated, it seems that we need some easy to use framework to quickly
-build these models and that's what TensorGraphX is designed for. It's a very simple
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 framework that adds a very thin layer above tensorflow. It is for more advanced
 users who want to have more control and flexibility over his model building and
 who wants efficiency at the same time.

 -----
-<<<<<<< HEAD
 ## Target Audience
 TensorGraph is targeted more at intermediate to advance users who feel keras or
-=======
-### Target Audience
-TensorGraphX is targeted more at intermediate to advance users who feel keras or
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 other packages is having too much restrictions and too much black box on model
 building, and someone who don't want to rewrite the standard layers in tensorflow
 constantly. Also for enterprise users who want to share deep learning models
 easily between teams.

-<<<<<<< HEAD
 ## Documentation

 You can check out the documentation [https://skymed.ai/pages/AI-Platform/TensorGraph/](https://skymed.ai/pages/AI-Platform/TensorGraph/)
@@ -56,47 +39,16 @@ git clone https://skymed.ai/AI-Platform/TensorGraph.git
 export PYTHONPATH=/path/to/TensorGraph:$PYTHONPATH
 ```
 in order for the install to persist via export `PYTHONPATH`. Add `PYTHONPATH=/path/to/TensorGraph:$PYTHONPATH` to your `.bashrc` for linux or
-=======
------
-### Install
-
-First you need to install [tensorflow](https://www.tensorflow.org/versions/r0.9/get_started/os_setup.html)
-
-To install tensorgraphx simply do via pip
-```bash
-sudo pip install tensorgraphx
-```
-or for bleeding edge version do
-```bash
-sudo pip install --upgrade git+https://github.com/hycis/TensorGraphX.git@master
-```
-or simply clone and add to `PYTHONPATH`.
-```bash
-git clone https://github.com/hycis/TensorGraphX.git
-export PYTHONPATH=/path/to/TensorGraphX:$PYTHONPATH
-```
-in order for the install to persist via export `PYTHONPATH`. Add `PYTHONPATH=/path/to/TensorGraphX:$PYTHONPATH` to your `.bashrc` for linux or
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 `.bash_profile` for mac. While this method works, you will have to ensure that
 all the dependencies in [setup.py](setup.py) are installed.

 -----
-<<<<<<< HEAD
 ## Everything in TensorGraph is about Layers
 Everything in TensorGraph is about layers. A model such as VGG or Resnet can be a layer. An identity block from Resnet or a dense block from Densenet can be a layer as well. Building models in TensorGraph is same as building a toy with lego. For example you can create a new model (layer) by subclass the `BaseModel` layer and use `DenseBlock` layer inside your `ModelA` layer.

 ```python
 from tensorgraph.layers import DenseBlock, BaseModel, Flatten, Linear, Softmax
 import tensorgraph as tg
-=======
-### Everything in TensorGraphX is about Layers
-Everything in TensorGraphX is about layers. A model such as VGG or Resnet can be a layer. An identity block from Resnet or a dense block from Densenet can be a layer as well. Building models in TensorGraphX is same as building a toy with lego. For example you can create a new model (layer) by subclass the `BaseModel` layer and use `DenseBlock` layer inside your `ModelA` layer.
-
-```python
-from tensorgraphx.layers import DenseBlock, BaseModel, Flatten, Linear, Softmax
-import tensorgraphx as tg
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
-
 class ModelA(BaseModel):
     @BaseModel.init_name_scope
     def __init__(self):
@@ -133,7 +85,6 @@ y_train = modelb.train_fprop(X_ph)
 y_test = modelb.test_fprop(X_ph)
 ```

-<<<<<<< HEAD
 checkout some well known models in TensorGraph
 1. [VGG16 code](tensorgraph/layers/backbones.py#L37) and [VGG19 code](tensorgraph/layers/backbones.py#L125) - [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)
 2. [DenseNet code](tensorgraph/layers/backbones.py#L477) - [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)
@@ -371,97 +322,26 @@ graph are two separate steps. By splitting them into two separate steps, we ensu
 the flexibility of building our computational graph without the worry of accidental
 reinitialization of the `Variables`.
 We defined three types of nodes
-=======
-checkout some well known models in TensorGraphX
-1. [VGG16 code](tensorgraphx/layers/backbones.py#L37) and [VGG19 code](tensorgraphx/layers/backbones.py#L125) - [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)
-2. [DenseNet code](tensorgraphx/layers/backbones.py#L477) - [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)
-3. [ResNet code](tensorgraphx/layers/backbones.py#L225) - [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
-4. [Unet code](tensorgraphx/layers/backbones.py#L531) - [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597)
-
------
-### TensorGraphX on Multiple GPUS
-To use tensorgraphx on multiple gpus, you can easily integrate it with [horovod](https://github.com/uber/horovod).
-
-```python
-import horovod.tensorflow as hvd
-from tensorflow.python.framework import ops
-import tensorflow as tf
-hvd.init()
-
-# tensorgraphx model derived previously
-modelb = ModelB()
-X_ph = tf.placeholder()
-y_ph = tf.placeholder()
-y_train = modelb.train_fprop(X_ph)
-y_test = modelb.test_fprop(X_ph)
-
-train_cost = mse(y_train, y_ph)
-test_cost = mse(y_test, y_ph)
-
-opt = tf.train.RMSPropOptimizer(0.001)
-opt = hvd.DistributedOptimizer(opt)
-
-# required for BatchNormalization layer
-update_ops = ops.get_collection(ops.GraphKeys.UPDATE_OPS)
-with ops.control_dependencies(update_ops):
-    train_op = opt.minimize(train_cost)
-
-init_op = tf.group(tf.global_variables_initializer(),
-                   tf.local_variables_initializer())
-bcast = hvd.broadcast_global_variables(0)
-
-# Pin GPU to be used to process local rank (one GPU per process)
-config = tf.ConfigProto()
-config.gpu_options.allow_growth = True
-config.gpu_options.visible_device_list = str(hvd.local_rank())
-
-with tf.Session(graph=graph, config=config) as sess:
-    sess.run(init_op)
-    bcast.run()
-
-    # training model
-    for epoch in range(100):
-        for X,y in train_data:
-            _, loss_train = sess.run([train_op, train_cost], feed_dict={X_ph:X, y_ph:y})
-```
-
-for a full example on [tensorgraphx on horovod](./examples/multi_gpus_horovod.py)
-
------
-### How TensorGraphX Works?
-In TensorGraphX, we defined three types of nodes
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

 1. StartNode : for inputs to the graph
 2. HiddenNode : for putting sequential layers inside
 3. EndNode : for getting outputs from the model

-<<<<<<< HEAD
 We put all the sequential layers into a `HiddenNode`, `HiddenNode` can be connected
 to another `HiddenNode` or `StartNode`, the nodes are connected together to form
 an architecture. The graph always starts with `StartNode` and ends with `EndNode`.
 Once we have defined an architecture, we can use the `Graph` object to connect the
 path we want in the architecture, there can be multiple StartNodes (s1, s2, etc)
 and multiple EndNodes (e1, e2, etc), we can define which path we want in the
 entire architecture, example to link from `s2` to `e1`. The `StartNode` is where you place
-=======
-We put all the sequential layers into a `HiddenNode`, and connect the hidden nodes
-together to build the architecture that you want. The graph always
-starts with `StartNode` and ends with `EndNode`. The `StartNode` is where you place
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 your starting point, it can be a `placeholder`, a symbolic output from another graph,
 or data output from `tfrecords`. `EndNode` is where you want to get an output from
 the graph, where the output can be used to calculate loss or simply just a peek at the
 outputs at that particular layer. Below shows an
 [example](examples/example.py) of building a tensor graph.

 -----
-<<<<<<< HEAD
 ## Graph Example
-=======
-### Graph Example
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
-
 <img src="draw/graph.png" height="250">

 First define the `StartNode` for putting the input placeholder
@@ -480,29 +360,25 @@ Then define the `HiddenNode` for putting the sequential layers in each `HiddenNo
 ```python
 h1 = HiddenNode(prev=[s1, s2],
                 input_merge_mode=Concat(),
-<<<<<<< HEAD
                 layers=[Linear(y2_dim), RELU()])
 h2 = HiddenNode(prev=[s2],
                 layers=[Linear(y2_dim), RELU()])
 h3 = HiddenNode(prev=[h1, h2],
                 input_merge_mode=Sum(),
                 layers=[Linear(y1_dim), RELU()])
-=======
                 layers=[Linear(y1_dim+y2_dim, y2_dim), RELU()])
 h2 = HiddenNode(prev=[s2],
                 layers=[Linear(y2_dim, y2_dim), RELU()])
 h3 = HiddenNode(prev=[h1, h2],
                 input_merge_mode=Sum(),
                 layers=[Linear(y2_dim, y1_dim), RELU()])
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 ```
 Then define the `EndNode`. `EndNode` is used to back-trace the graph to connect
 the nodes together.
 ```python
 e1 = EndNode(prev=[h3])
 e2 = EndNode(prev=[h2])
 ```
-<<<<<<< HEAD
 Finally build the graph by putting `StartNodes` and `EndNodes` into `Graph`, we
 can choose to use the entire architecture by using all the `StartNodes` and `EndNodes`
 and run the forward propagation to get symbolic output from train mode. The number
@@ -517,19 +393,6 @@ or we can choose which node to start and which node to end, example
 graph = Graph(start=[s2], end=[e1])
 o1, = graph.train_fprop()
 ```
-
-=======
-Finally build the graph by putting `StartNodes` and `EndNodes` into `Graph`
-```python
-graph = Graph(start=[s1, s2], end=[e1, e2])
-```
-Run train forward propagation to get symbolic output from train mode. The number
-of outputs from `graph.train_fprop` is the same as the number of `EndNodes` put
-into `Graph`
-```python
-o1, o2 = graph.train_fprop()
-```
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 Finally build an optimizer to optimize the objective function
 ```python
 o1_mse = tf.reduce_mean((y1 - o1)**2)
@@ -590,10 +453,6 @@ for a full example on [tensorgraph on horovod](./examples/multi_gpus_horovod.py)

 -----
 ## Hierachical Softmax Example
-=======
------
-### Hierachical Softmax Example
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 Below is another example for building a more powerful [hierachical softmax](examples/hierachical_softmax.py)
 whereby the lower hierachical softmax layer can be conditioned on all the upper
 hierachical softmax layers.
@@ -617,15 +476,12 @@ y3_ph = tf.placeholder('float32', [None, component_dim])
 # define the graph model structure
 start = StartNode(input_vars=[x_ph])

-<<<<<<< HEAD
 h1 = HiddenNode(prev=[start], layers=[Linear(component_dim), Softmax()])
 h2 = HiddenNode(prev=[h1], layers=[Linear(component_dim), Softmax()])
 h3 = HiddenNode(prev=[h2], layers=[Linear(component_dim), Softmax()])
-=======
 h1 = HiddenNode(prev=[start], layers=[Linear(x_dim, component_dim), Softmax()])
 h2 = HiddenNode(prev=[h1], layers=[Linear(component_dim, component_dim), Softmax()])
 h3 = HiddenNode(prev=[h2], layers=[Linear(component_dim, component_dim), Softmax()])
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5


 e1 = EndNode(prev=[h1], input_merge_mode=Sum())
@@ -644,15 +500,9 @@ optimizer = tf.train.AdamOptimizer(learning_rate).minimize(mse)
 ```

 -----
-<<<<<<< HEAD
 ## Transfer Learning Example
 Below is an example on transfer learning with bi-modality inputs and merge at
 the middle layer with shared representation, in fact, TensorGraph can be used
-=======
-### Transfer Learning Example
-Below is an example on transfer learning with bi-modality inputs and merge at
-the middle layer with shared representation, in fact, TensorGraphX can be used
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 to build any number of modalities for transfer learning.

 <img src="draw/transferlearn.png" height="250">
@@ -675,17 +525,14 @@ y_ph = tf.placeholder('float32', [None, y_dim])
 s1 = StartNode(input_vars=[x1_ph])
 s2 = StartNode(input_vars=[x2_ph])

-<<<<<<< HEAD
 h1 = HiddenNode(prev=[s1], layers=[Linear(shared_dim), RELU()])
 h2 = HiddenNode(prev=[s2], layers=[Linear(shared_dim), RELU()])
 h3 = HiddenNode(prev=[h1,h2], input_merge_mode=Sum(),
                 layers=[Linear(y_dim), Softmax()])
-=======
 h1 = HiddenNode(prev=[s1], layers=[Linear(x1_dim, shared_dim), RELU()])
 h2 = HiddenNode(prev=[s2], layers=[Linear(x2_dim, shared_dim), RELU()])
 h3 = HiddenNode(prev=[h1,h2], input_merge_mode=Sum(),
                 layers=[Linear(shared_dim, y_dim), Softmax()])
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

 e1 = EndNode(prev=[h3])

pipupdate.sh

Lines changed: 0 additions & 6 deletions
@@ -2,13 +2,7 @@

 version=$1
 git tag $version -m "update to version $version"
-<<<<<<< HEAD
 git push --tag

 # python setup.py register -r pypi
 # python setup.py sdist upload -r pypi
-=======
-git push --tag origin master
-python setup.py register -r pypi
-python setup.py sdist upload -r pypi
->>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
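Every deletion in this commit is leftover Git merge-conflict debris: the `<<<<<<< HEAD`, `=======`, and `>>>>>>> e55a706e…` markers, plus, in most hunks, the TensorGraphX side of the conflict (a few README.md hunks keep both sides and drop only the markers). As a rough sanity check, assuming a local checkout of this branch, a plain `git grep` can confirm that no markers survive:

```bash
# Scan all tracked files for conflict-marker lines left behind by a merge.
# git grep exits with status 1 when nothing matches, so an empty listing
# (and the fallback echo) means the tree is clean.
git grep -nE '^(<<<<<<< |=======$|>>>>>>> )' || echo "no conflict markers left"
```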
