`master` [![Build Status](http://54.222.242.222:1010/buildStatus/icon?job=TensorGraph/master)](http://54.222.242.222:1010/job/TensorGraph/master)
`develop` [![Build Status](http://54.222.242.222:1010/buildStatus/icon?job=TensorGraph/develop)](http://54.222.242.222:1010/job/TensorGraph/develop)

# TensorGraph - Simplicity is Beauty
TensorGraph is a simple, lean, and clean framework on TensorFlow for building any imaginable models.

As deep learning becomes more and more common and architectures become more
and more complicated, it seems that we need an easy-to-use framework to quickly
build these models, and that's what TensorGraph is designed for. It's a very simple
framework that adds a very thin layer above tensorflow. It is for more advanced
users who want more control and flexibility over their model building and
who want efficiency at the same time.

-----
## Target Audience
TensorGraph is targeted more at intermediate to advanced users who feel that keras or
other packages impose too many restrictions and too much black-boxing on model
building, and who don't want to rewrite the standard layers in tensorflow
constantly. It is also for enterprise users who want to share deep learning models
easily between teams.

## Documentation

You can check out the documentation at [https://skymed.ai/pages/AI-Platform/TensorGraph/](https://skymed.ai/pages/AI-Platform/TensorGraph/)

-----
## Install

First you need to install [tensorflow](https://www.tensorflow.org/versions/r0.9/get_started/os_setup.html).

Then clone the repository and add it to your `PYTHONPATH`:
```bash
git clone https://skymed.ai/AI-Platform/TensorGraph.git
export PYTHONPATH=/path/to/TensorGraph:$PYTHONPATH
```
To make the install persist, add `export PYTHONPATH=/path/to/TensorGraph:$PYTHONPATH` to your
`.bashrc` for linux or `.bash_profile` for mac. While this method works, you will have to ensure that
all the dependencies in [setup.py](setup.py) are installed.
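
For example, on linux (the clone path is a placeholder for wherever you cloned the repo):
```bash
# append the export to your shell startup file and reload it
echo 'export PYTHONPATH=/path/to/TensorGraph:$PYTHONPATH' >> ~/.bashrc
source ~/.bashrc
```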

-----
## Everything in TensorGraph is about Layers
Everything in TensorGraph is about layers. A model such as VGG or Resnet can be a layer. An identity block from Resnet or a dense block from Densenet can be a layer as well. Building models in TensorGraph is the same as building a toy with Lego. For example, you can create a new model (layer) by subclassing the `BaseModel` layer and using the `DenseBlock` layer inside your `ModelA` layer.

```python
import tensorflow as tf
from tensorgraph.layers import DenseBlock, BaseModel, Flatten, Linear, Softmax
import tensorgraph as tg
class ModelA(BaseModel):
    @BaseModel.init_name_scope
    def __init__(self):
        # (body elided in this excerpt) __init__ goes on to stack layers
        # such as DenseBlock, Flatten, Linear and Softmax into the model

# ModelB is defined in the same fashion and can be composed like any other layer
modelb = ModelB()
X_ph = tf.placeholder('float32', [None, 32, 32, 1])  # illustrative shape
y_train = modelb.train_fprop(X_ph)
y_test = modelb.test_fprop(X_ph)
```

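For orientation, the elided body of `ModelA` can be fleshed out along these lines. This is a minimal sketch, not the repository's exact definition; the `DenseBlock()` arguments, the `Linear(10)` output dimension, and the `StartNode`/`HiddenNode`/`EndNode` wiring are assumptions based on the graph API shown later in this README:

```python
import tensorgraph as tg
from tensorgraph.layers import DenseBlock, BaseModel, Flatten, Linear, Softmax

class ModelA(BaseModel):
    @BaseModel.init_name_scope
    def __init__(self):
        # stack a dense block followed by a flattened classifier head
        layers = [DenseBlock(), Flatten(), Linear(10), Softmax()]
        # wire the layers between a StartNode and an EndNode so that
        # ModelA itself behaves like a single layer
        self.startnode = tg.StartNode(input_vars=[None])
        hn = tg.HiddenNode(prev=[self.startnode], layers=layers)
        self.endnode = tg.EndNode(prev=[hn])
```
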
Check out some well-known models in TensorGraph:
1. [VGG16 code](tensorgraph/layers/backbones.py#L37) and [VGG19 code](tensorgraph/layers/backbones.py#L125) - [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)
2. [DenseNet code](tensorgraph/layers/backbones.py#L477) - [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)
3. [ResNet code](tensorgraph/layers/backbones.py#L225) - [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
4. [Unet code](tensorgraph/layers/backbones.py#L531) - [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597)

-----
## How TensorGraph Works?
In TensorGraph, initializing the layers and connecting the
graph are two separate steps. By splitting them into two separate steps, we ensure
the flexibility of building our computational graph without the worry of accidental
reinitialization of the `Variables`.
We define three types of nodes:

1. StartNode: for inputs to the graph
2. HiddenNode: for putting sequential layers inside
3. EndNode: for getting outputs from the model

We put all the sequential layers into a `HiddenNode`. A `HiddenNode` can be connected
to another `HiddenNode` or to a `StartNode`, and the nodes are connected together to form
an architecture. The graph always starts with `StartNode` and ends with `EndNode`.
Once we have defined an architecture, we can use the `Graph` object to connect the
path we want in the architecture: there can be multiple StartNodes (s1, s2, etc.)
and multiple EndNodes (e1, e2, etc.), and we can define which path we want in the
entire architecture, for example linking from `s2` to `e1`. The `StartNode` is where you place
your starting point; it can be a `placeholder`, a symbolic output from another graph,
or data output from `tfrecords`. `EndNode` is where you want to get an output from
the graph, where the output can be used to calculate loss or simply to peek at the
outputs at that particular layer. Below is an
[example](examples/example.py) of building a tensor graph.

-----
## Graph Example
<img src="draw/graph.png" height="250">

First define the `StartNode` for putting the input placeholder:
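A minimal sketch, assuming illustrative placeholder dimensions and import paths (see [examples/example.py](examples/example.py) for the actual version):
```python
import tensorflow as tf
# assumed import locations for the node, graph, and layer classes
from tensorgraph.node import StartNode, HiddenNode, EndNode
from tensorgraph.graph import Graph
from tensorgraph.layers import Linear, RELU, Concat, Sum

y1_dim = 50   # illustrative dimensions
y2_dim = 100

x1_ph = tf.placeholder('float32', [None, y1_dim])
x2_ph = tf.placeholder('float32', [None, y2_dim])
y1 = tf.placeholder('float32', [None, y1_dim])  # labels for the two outputs
y2 = tf.placeholder('float32', [None, y2_dim])

s1 = StartNode(input_vars=[x1_ph])
s2 = StartNode(input_vars=[x2_ph])
```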
Then define the `HiddenNode` for putting the sequential layers in each `HiddenNode`:
```python
h1 = HiddenNode(prev=[s1, s2],
                input_merge_mode=Concat(),
                layers=[Linear(y2_dim), RELU()])
h2 = HiddenNode(prev=[s2],
                layers=[Linear(y2_dim), RELU()])
h3 = HiddenNode(prev=[h1, h2],
                input_merge_mode=Sum(),
                layers=[Linear(y1_dim), RELU()])
```
Then define the `EndNode`. `EndNode` is used to back-trace the graph to connect
the nodes together.
```python
e1 = EndNode(prev=[h3])
e2 = EndNode(prev=[h2])
```
Finally, build the graph by putting `StartNodes` and `EndNodes` into `Graph`. We
can choose to use the entire architecture by using all the `StartNodes` and `EndNodes`,
and run the forward propagation to get symbolic outputs in train mode. The number
of outputs from `graph.train_fprop` is the same as the number of `EndNodes` put
into `Graph`.
```python
graph = Graph(start=[s1, s2], end=[e1, e2])
o1, o2 = graph.train_fprop()
```
Or we can choose which node to start from and which node to end at, for example:
```python
graph = Graph(start=[s2], end=[e1])
o1, = graph.train_fprop()
```
Then build an optimizer to optimize the objective function:
```python
o1_mse = tf.reduce_mean((y1 - o1)**2)
o2_mse = tf.reduce_mean((y2 - o2)**2)
mse = o1_mse + o2_mse
optimizer = tf.train.AdamOptimizer(0.001).minimize(mse)
```

-----
## TensorGraph on Multiple GPUs
To use tensorgraph on multiple gpus, you can easily integrate it with [horovod](https://github.com/uber/horovod).
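
A sketch of the integration (the placeholder shapes, the `mse` cost helper
import, and the `train_data` iterator are illustrative assumptions):

```python
import horovod.tensorflow as hvd
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorgraph.cost import mse  # assumed location of the mse cost helper

hvd.init()

# tensorgraph model defined previously
modelb = ModelB()
X_ph = tf.placeholder('float32', [None, 32, 32, 1])  # illustrative shapes
y_ph = tf.placeholder('float32', [None, 10])
y_train = modelb.train_fprop(X_ph)
y_test = modelb.test_fprop(X_ph)

train_cost = mse(y_train, y_ph)
test_cost = mse(y_test, y_ph)

opt = tf.train.RMSPropOptimizer(0.001)
opt = hvd.DistributedOptimizer(opt)

# required for the BatchNormalization layer
update_ops = ops.get_collection(ops.GraphKeys.UPDATE_OPS)
with ops.control_dependencies(update_ops):
    train_op = opt.minimize(train_cost)

init_op = tf.group(tf.global_variables_initializer(),
                   tf.local_variables_initializer())
bcast = hvd.broadcast_global_variables(0)

# pin each process to one GPU according to its local rank
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.visible_device_list = str(hvd.local_rank())

with tf.Session(config=config) as sess:
    sess.run(init_op)
    bcast.run()

    # training loop
    for epoch in range(100):
        for X, y in train_data:  # train_data: your minibatch iterator
            _, loss_train = sess.run([train_op, train_cost],
                                     feed_dict={X_ph: X, y_ph: y})
```

For a full example, see [tensorgraph on horovod](./examples/multi_gpus_horovod.py).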

-----
## Hierarchical Softmax Example
Below is another example of building a more powerful [hierarchical softmax](examples/hierachical_softmax.py)
whereby the lower hierarchical softmax layer can be conditioned on all the upper
hierarchical softmax layers.
```python
# dimensions and learning rate below are illustrative
x_dim = 50
component_dim = 100
learning_rate = 0.001

x_ph = tf.placeholder('float32', [None, x_dim])
y1_ph = tf.placeholder('float32', [None, component_dim])
y2_ph = tf.placeholder('float32', [None, component_dim])
y3_ph = tf.placeholder('float32', [None, component_dim])
# define the graph model structure
start = StartNode(input_vars=[x_ph])

h1 = HiddenNode(prev=[start], layers=[Linear(component_dim), Softmax()])
h2 = HiddenNode(prev=[h1], layers=[Linear(component_dim), Softmax()])
h3 = HiddenNode(prev=[h2], layers=[Linear(component_dim), Softmax()])


e1 = EndNode(prev=[h1], input_merge_mode=Sum())
e2 = EndNode(prev=[h1, h2], input_merge_mode=Sum())
e3 = EndNode(prev=[h1, h2, h3], input_merge_mode=Sum())

graph = Graph(start=[start], end=[e1, e2, e3])
o1, o2, o3 = graph.train_fprop()

o1_mse = tf.reduce_mean((y1_ph - o1)**2)
o2_mse = tf.reduce_mean((y2_ph - o2)**2)
o3_mse = tf.reduce_mean((y3_ph - o3)**2)
mse = o1_mse + o2_mse + o3_mse
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(mse)
```

-----
## Transfer Learning Example
Below is an example of transfer learning with bi-modal inputs merged at
the middle layer with a shared representation; in fact, TensorGraph can be used
to build any number of modalities for transfer learning.

<img src="draw/transferlearn.png" height="250">

```python
# dimensions are illustrative
x1_dim = 50
x2_dim = 100
shared_dim = 200
y_dim = 10
learning_rate = 0.001

x1_ph = tf.placeholder('float32', [None, x1_dim])
x2_ph = tf.placeholder('float32', [None, x2_dim])
y_ph = tf.placeholder('float32', [None, y_dim])
s1 = StartNode(input_vars=[x1_ph])
s2 = StartNode(input_vars=[x2_ph])

h1 = HiddenNode(prev=[s1], layers=[Linear(shared_dim), RELU()])
h2 = HiddenNode(prev=[s2], layers=[Linear(shared_dim), RELU()])
h3 = HiddenNode(prev=[h1, h2], input_merge_mode=Sum(),
                layers=[Linear(y_dim), Softmax()])

e1 = EndNode(prev=[h3])
```
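
The graph can then be built and trained in the same way as the graph example
above; a minimal sketch, assuming a mean-square-error objective:
```python
graph = Graph(start=[s1, s2], end=[e1])
o1, = graph.train_fprop()

mse = tf.reduce_mean((y_ph - o1)**2)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(mse)
```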