# Time Series Dashboard

The Time Series Dashboard shows a unified interface containing all your Scalars,
Histograms, and Images saved via `tf.summary.scalar`, `tf.summary.image`, or
`tf.summary.histogram`. For example, it lets you view your 'accuracy' line chart
side by side with activation histograms and training example images.

Features include:

* Custom run colors: click on the colored circles in the run selector to change
  a run's color.
* Pinned cards: click the 'pin' icon on any card to add it to the pinned section
  at the top for quick comparison.
* Settings: the right pane offers settings for charts and other visualizations.
  Important settings persist across TensorBoard sessions when hosted at the same
  URL origin.
* Autocomplete in tag filter: search for specific charts more easily.

## Implementation notes

### Backend

This dashboard relies on direct requests against the endpoints provided in
[plugins/metrics/](https://github.com/tensorflow/tensorboard/blob/449273b7c3124283c0837a2caa2f887b1dc6235f/tensorboard/plugins/metrics/).
See full details about the API in
[plugins/metrics/http_api.md](https://github.com/tensorflow/tensorboard/blob/449273b7c3124283c0837a2caa2f887b1dc6235f/tensorboard/plugins/metrics/http_api.md).
The backend types and endpoints are intended to fully match those specified in
TypeScript at
[webapp/metrics/data_source/metrics_backend_types.ts](https://github.com/tensorflow/tensorboard/blob/449273b7c3124283c0837a2caa2f887b1dc6235f/tensorboard/webapp/metrics/data_source/metrics_backend_types.ts).
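As a purely illustrative sketch of what a client-side request to such an endpoint might look like, the snippet below assembles a URL and JSON body. The route name and payload shape here are assumptions for illustration only; consult http_api.md above for the real contract:

```python
import json

# Hypothetical plugin route -- see http_api.md for the actual endpoint names
# and request schema.
BASE = "/data/plugin/timeseries"


def build_time_series_request(run_id, tag, plugin="scalars"):
    """Assemble a (url, body) pair for a hypothetical multi-run data fetch."""
    body = {"requests": [{"plugin": plugin, "tag": tag, "runId": run_id}]}
    return f"{BASE}/timeSeries", json.dumps(body)


url, body = build_time_series_request("exp1/run_directory_1", "accuracy")
```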

### Frontend

Notably, the frontend is aware of a few concepts:

* Experiment: the currently selected logdir of a TensorBoard instance represents
  a fixed scope of runs. In this sense, an 'experiment' represents the collection
  of all the runs in the logdir.
* RunId: users currently provide a run name during summary writing, for example
  'run_directory_1' in `tf.summary.create_file_writer('run_directory_1')`. To
  support a future with multiple experiments, the RunId concept is a unique
  identifier of a single run in a specific experiment. Multiple experiments can
  each have a run named 'run_directory_1', but each run will have its own RunId.
* Card: each rectangular item shown in the Time Series dashboard's main view.
* Sampled vs. non-sampled plugins: some plugin types, like images, are written
  as summaries that contain multiple images at each step. For example,
  `tf.summary.image("eval_images", [image1, image2], step)` writes 2 image
  samples to the same run/tag/step. Non-sampled plugin types, such as scalars,
  do not have the concept of a 'sample'.
* Single-run vs. multi-run: this frontend concept describes whether a plugin
  type shows one run's worth of data in a single card (e.g. images) or multiple
  runs' worth of data in a single card (e.g. scalars).
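The RunId concept above can be illustrated with a small sketch. The registry class and ID scheme are illustrative assumptions, not the actual frontend implementation:

```python
from itertools import count


class RunIdRegistry:
    """Maps (experiment, run name) pairs to unique RunIds.

    Two experiments may each contain a run named 'run_directory_1';
    each (experiment, run name) pair still gets its own distinct RunId.
    """

    def __init__(self):
        self._ids = {}
        self._counter = count(1)

    def run_id(self, experiment, run_name):
        key = (experiment, run_name)
        if key not in self._ids:
            self._ids[key] = f"run-{next(self._counter)}"
        return self._ids[key]


registry = RunIdRegistry()
a = registry.run_id("exp1", "run_directory_1")
b = registry.run_id("exp2", "run_directory_1")  # same name, different RunId
```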

As of July 2021, the component hierarchy looks as follows: