Notes on the TensorFlow Programmer's Guide.
1. Basic concepts:
TensorFlow separates the definition of computations from their execution. It first assembles a graph (tf.Graph) and then uses a session (tf.Session) to execute operations in the graph. A graph can be split into subgraphs, and execution can be distributed across devices.
- Tensor: an n-dimensional array (not mathematically rigorous).
- Nodes (in graphs): operators, variables, and constants.
- Edges (in graphs): tensors.
- tf.Session(): encapsulates the environment in which operation objects are executed and tensor objects are evaluated. It also allocates memory to store the current values of variables.
- tf.Graph(): creates a new graph. tf.get_default_graph() returns the default graph. Mixing a user-created graph with the default graph is error prone.
- tf.constant(): sets a specific value that cannot be changed. Note that this makes loading graphs expensive when constants are big.
- tf.Variable() or tf.get_variable(): a class with many ops, such as an init op, read op, write op, and more. tf.get_variable() is usually better. Initialization must be done!
- tf.placeholder(): a class for which we supply our own data later, when we need to execute the computation. Data is fed to placeholders using a dictionary (feed_dict).
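A minimal sketch tying these pieces together (the values and names are arbitrary):

import tensorflow as tf

a = tf.constant(2, name="a")                          # constant: fixed value
b = tf.placeholder(tf.int32, shape=[], name="b")      # placeholder: fed at run time
w = tf.get_variable("w", initializer=tf.constant(1))  # variable: needs initialization
c = tf.add(a, b) + w

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # run the init op first
    print(sess.run(c, feed_dict={b: 3}))          # feed via dictionary -> prints 6
    writer = tf.summary.FileWriter("./graphs", sess.graph)  # for TensorBoard
    writer.close()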
To visualize the graph in TensorBoard, write it with tf.summary.FileWriter (as in the sketch above), then run:
$ python [yourprogram].py
$ tensorboard --logdir="./graphs" --port XXXX
Questions and tips (to be updated):
- What if I want to build more than one graph? This is a bad idea: multiple graphs require multiple sessions, and each will try to use all available resources by default. Furthermore, we cannot pass data between them without going through Python, which does not work in a distributed setting. It is better to have disconnected subgraphs within one graph!
- Never mix default graph and user created graph. It is just error prone.
- Use TensorFlow data types instead of NumPy and native Python types as much as possible. TensorFlow has to infer Python native types, and NumPy arrays are not GPU compatible.
- Use tf.constant() only for primitive types, and do not make constants big.
- Lazy loading must always be avoided (e.g., defining an op inside a training loop adds a new node to the graph on every iteration).
- Datasets are the preferred method of streaming data into a model. Use placeholders only for simple experiments.
2. High level API - Eager Execution.
Eager execution is an imperative programming environment that evaluates operations immediately instead of building a graph and executing it later. It is mainly useful for debugging.
- tf.enable_eager_execution(): enables eager execution; it must be called at the very beginning of the program.
- import tensorflow.contrib.eager as tfe: the tfe module contains symbols available to both eager and graph execution environments and is useful for writing code that also works with graphs.
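A minimal sketch of eager execution (values are arbitrary):

import tensorflow as tf

tf.enable_eager_execution()  # must run at program startup, before any other ops

x = tf.constant([[1.0, 2.0]])
y = tf.matmul(x, x, transpose_b=True)  # evaluated immediately, no Session needed
print(y)  # tf.Tensor([[5.]], shape=(1, 1), dtype=float32)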
3. High level API - Importing data: tf.data API.
The tf.data API has two abstractions: (1) tf.data.Dataset (a sequence of elements in which each element is a training example; a dataset can be created from tf.Tensor objects or by transforming existing tf.data.Dataset objects) and (2) tf.data.Iterator (the main way to extract elements from a dataset).
Dataset structure: a dataset comprises elements with the same structure. An element contains tensor objects called components.
- tf.data.Dataset.from_tensor_slices(): creates a dataset object with a type and shape.
- tf.data.Dataset.zip(): combines dataset objects.
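A sketch of creating and combining datasets (shapes and values are illustrative):

import tensorflow as tf

features = tf.data.Dataset.from_tensor_slices(tf.random_uniform([4, 10]))
labels = tf.data.Dataset.from_tensor_slices(tf.constant([0, 1, 0, 1]))
dataset = tf.data.Dataset.zip((features, labels))  # combine into (feature, label) pairs

print(dataset.output_types)   # (tf.float32, tf.int32)
print(dataset.output_shapes)  # (TensorShape([10]), TensorShape([]))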
There are four iterator types (a sketch of the first two follows this list):
- One-shot: the simplest; only supports iterating once through a dataset. The only type that is easily usable with the Estimator API.
- Initializable: enables parameterization of the dataset definition using placeholders.
- Reinitializable: an iterator that can be initialized across multiple dataset objects, e.g. training and validation sets, within the same iterator object.
- Feedable: the same as reinitializable but does not require initializing the iterator from the start when switching between iterators.
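A sketch of the one-shot and initializable iterators (ranges are arbitrary):

import tensorflow as tf

dataset = tf.data.Dataset.range(5)
one_shot = dataset.make_one_shot_iterator()  # no initialization, single pass
next_elem = one_shot.get_next()

limit = tf.placeholder(tf.int64, shape=[])   # parameterizes the definition
param_dataset = tf.data.Dataset.range(limit)
init_iter = param_dataset.make_initializable_iterator()
next_param = init_iter.get_next()

with tf.Session() as sess:
    print(sess.run(next_elem))                             # 0
    sess.run(init_iter.initializer, feed_dict={limit: 3})  # must initialize first
    print(sess.run(next_param))                            # 0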
- NumPy arrays: load NumPy arrays and store them as a dataset object. Then use one of the iterators.
- TFRecord data: the tf.data API supports a variety of file formats so that you can process large datasets that do not fit in memory. The tf.data.TFRecordDataset class enables streaming over the contents of one or more TFRecord files as part of an input pipeline.
- Text data: a high level API (tf.data.TextLineDataset) that enables easy handling of text files.
- Always use a try-except block while iterating during training.
- Use the tf.contrib.data.make_saveable_from_iterator function to create a saveable object from an iterator; a tf.train.Saver can then be used to save and restore the iterator's state.
How to decode image data and resize it?
- tf.image.decode_image and tf.image.resize_images can be used in a parse function passed to Dataset.map(), e.g. to convert images of different sizes to a common size so that they may be batched to a fixed size (sketched below).
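A sketch of such a parse function, assuming JPEG files; the file names and the target size of 224x224 are hypothetical:

import tensorflow as tf

def _parse_function(filename, label):
    image_string = tf.read_file(filename)
    image = tf.image.decode_jpeg(image_string, channels=3)  # JPEG assumed here
    image = tf.image.resize_images(image, [224, 224])       # common size for batching
    return image, label

filenames = tf.constant(["img1.jpg", "img2.jpg"])  # hypothetical paths
labels = tf.constant([0, 1])
dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
dataset = dataset.map(_parse_function)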
- tf.train.Example protocol buffer messages can be extracted from a TFRecord-format file (e.g. images and labels) by using Dataset.map() with a function that parses them. For example, a scalar string can be transformed into a pair of a scalar string and a scalar integer, representing an image and its label.
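A sketch of such a parse function; the feature names ("image", "label") and the file path are assumptions:

import tensorflow as tf

def _parse_function(example_proto):
    # Feature names are assumptions; they must match how the file was written.
    features = {"image": tf.FixedLenFeature((), tf.string, default_value=""),
                "label": tf.FixedLenFeature((), tf.int64, default_value=0)}
    parsed = tf.parse_single_example(example_proto, features)
    return parsed["image"], parsed["label"]  # scalar string -> (string, int) pair

dataset = tf.data.TFRecordDataset(["data.tfrecord"])  # hypothetical file
dataset = dataset.map(_parse_function)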
- A tf.py_func() operation can be used in a Dataset.map() transformation if we want to call external Python libraries to parse the input data.
- Dataset.batch() stacks n consecutive elements of a dataset into a single batched element.
- Dataset.padded_batch() can be used when tensors have varying size.
- Dataset.repeat() creates a dataset that repeats its input, e.g. Dataset.repeat(10) repeats for 10 epochs; with no argument it repeats indefinitely.
- When using this, training must be wrapped in a try-except block that catches tf.errors.OutOfRangeError.
- Dataset.shuffle() randomly shuffles the input dataset with a fixed-size buffer; the next elements are drawn uniformly at random from that buffer. (See the pipeline sketch below.)
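A sketch of a pipeline combining these transformations (buffer and batch sizes are arbitrary):

import tensorflow as tf

dataset = tf.data.Dataset.range(100)
dataset = dataset.shuffle(buffer_size=10)  # uniform draws from a 10-element buffer
dataset = dataset.repeat(10)               # 10 epochs; repeat() would be indefinite
dataset = dataset.batch(32)                # stacks of 32 consecutive elements
next_batch = dataset.make_one_shot_iterator().get_next()

with tf.Session() as sess:
    try:
        while True:
            sess.run(next_batch)
    except tf.errors.OutOfRangeError:
        pass  # reached the end of the finite dataset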
4. High level API - Estimators.
Estimators encapsulate the following actions: training, evaluation, prediction, and export for serving. There exist pre-made and custom-made Estimators, both based on the tf.estimator.Estimator class.
Using pre-made Estimators involves the following steps (a sketch follows this list):
- Writing dataset importing functions.
- Defining the feature columns.
- Instantiating the relevant pre-made Estimator.
- Calling a training, evaluation, or inference method.
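A minimal sketch of the pre-made Estimator workflow; the feature name, network shape, and synthetic input data are assumptions:

import tensorflow as tf

feature_x = tf.feature_column.numeric_column("x", shape=[4])

def train_input_fn():
    # Synthetic data standing in for a real importing function.
    features = {"x": tf.random_uniform([100, 4])}
    labels = tf.random_uniform([100], maxval=2, dtype=tf.int32)
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    return dataset.shuffle(100).repeat().batch(16)

estimator = tf.estimator.DNNClassifier(
    feature_columns=[feature_x], hidden_units=[10, 10], n_classes=2)
estimator.train(input_fn=train_input_fn, steps=100)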
Writing a custom Estimator involves:
- Dataset importing functions.
- Defining the feature columns.
- A model function.
- Training, evaluation, and prediction functions.
- Instantiating the Estimator and calling a training, evaluation, or inference method.
5. Low level API - Tensors and variables.
Some useful APIs and functions for tf.Tensor objects:
- tf.rank(): determines the rank of a tf.Tensor object.
- tf.Tensor slicing: the same as Python slicing syntax.
- tf.shape(): determines the shape of tf.Tensor object.
- tf.reshape(): reshapes the tf.Tensor object.
- tf.cast(): casts a tf.Tensor object to another type.
- Tensor.eval(): fetches the value of a tf.Tensor (within a session).
- tf.Print(): printing the tf.Tensor object.
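A short sketch exercising these functions (values are arbitrary):

import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])
r = tf.rank(t)               # -> 2
s = tf.shape(t)              # -> [2, 3]
m = tf.reshape(t, [3, 2])
f = tf.cast(t, tf.float32)
sliced = t[0, 1:]            # Python-style slicing -> [2, 3]

with tf.Session() as sess:
    print(r.eval())          # Tensor.eval() fetches the value in a session
    print(sess.run([s, m, f, sliced]))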
- Variable creation: tf.get_variable() is recommended. The default type is tf.float32, and variables can be initialized via tf.glorot_uniform_initializer or tf.zeros_initializer. Inputs are typically a name, shape, type, and initializer.
- Variable collections: tf.GraphKeys.GLOBAL_VARIABLES (variables that can be shared across multiple devices) and tf.GraphKeys.TRAINABLE_VARIABLES (variables for which TensorFlow will calculate gradients) are the default collections into which each variable is placed. These defaults can be changed.
- Device placement: variables can be manually placed on different devices. Variables must be on the correct devices in distributed settings; tf.train.replica_device_setter automatically places variables on parameter servers.
- Variable initialization: variables can be initialized with session.run(tf.global_variables_initializer()). If the order of initialization matters, do not use this; initialize variables individually.
- Sharing variables: variables can be shared by (1) explicitly passing tf.Variable objects around or (2) implicitly wrapping tf.Variable objects within tf.variable_scope objects (see the sketch below).
- Operations: many TensorFlow operation functions exist, e.g. tf.add(). Always use them.
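A sketch of implicit sharing via tf.variable_scope; the layer sizes are arbitrary:

import tensorflow as tf

def dense_layer(x):
    w = tf.get_variable("w", shape=[int(x.shape[1]), 8],
                        initializer=tf.glorot_uniform_initializer())
    b = tf.get_variable("b", shape=[8], initializer=tf.zeros_initializer())
    return tf.add(tf.matmul(x, w), b)  # prefer tf.* ops over Python operators

x1 = tf.random_uniform([4, 16])
x2 = tf.random_uniform([4, 16])
with tf.variable_scope("layer"):
    y1 = dense_layer(x1)
with tf.variable_scope("layer", reuse=True):  # reuses "layer/w" and "layer/b"
    y2 = dense_layer(x2)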
6. Low level API - Graphs and Sessions.
Some useful information about graphs:
- Naming operations: naming operations helps when visualizing graphs. Scopes add a prefix to all operations created in the same context.
- Graphs are typically visualized in TensorBoard.
- Creating a session and executing operations is done with tf.Session.
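A minimal sketch of naming with tf.name_scope and running operations in a session:

import tensorflow as tf

with tf.name_scope("layer_1"):
    a = tf.constant(1.0, name="a")       # op name becomes "layer_1/a"
    b = tf.add(a, 2.0, name="plus_two")  # op name becomes "layer_1/plus_two"

with tf.Session() as sess:
    print(sess.run(b))  # 3.0
    print(b.op.name)    # layer_1/plus_two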
7. Low level API - Save and Restore.
To save and restore models, the tf.train.Saver class is typically used. The tf.saved_model.simple_save function is one way to build a saved model suitable for serving.
Saving and restoring variables:
- Saving variables: tf.train.Saver() is created as an op before the session. Then, during the session, saver.save() is used to save the variables to a specific directory.
- Restoring variables: tf.train.Saver() is again created as an op before the session. Then, during the session, saver.restore() is used to restore the previously saved variables.
- Choosing variables to save and restore: pass tf.train.Saver() a dictionary mapping names to variables.
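A sketch of saving and restoring; the checkpoint path is hypothetical:

import tensorflow as tf

v = tf.get_variable("v", initializer=tf.constant([1.0, 2.0]))
saver = tf.train.Saver()  # optionally pass {"name": variable} to save a subset

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "./ckpt/model.ckpt")  # hypothetical path

with tf.Session() as sess:
    saver.restore(sess, "./ckpt/model.ckpt")  # restored variables need no init op
    print(sess.run(v))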
- Checkpoint: We can quickly inspect variables in a checkpoint with the inspect_checkpoint library.
- Simple save: the tf.saved_model.simple_save() function saves variables, the graph, and its metadata (a sketch follows this list). The result can be loaded by TensorFlow Serving and supports the Predict API. If it doesn't cover your needs, use the manual builder APIs to create a SavedModel.
- Manually building a saved model: the tf.saved_model.builder.SavedModelBuilder class provides functionality to save multiple MetaGraphDefs. Examples of saving MetaGraphDefs for training and inference are given.
- Python - tf.saved_model.loader.load(): arguments are the session, the tags for the MetaGraphDef and the location of the saved model. A specific MetaGraphDef will be restored into the supplied session.
- C++ - LoadSavedModel(): the SavedModelBundle class is used with arguments such as session options, run options, the directory, the tag, and a pointer to the SavedModelBundle instance. In general, inference in C++ comes with less overhead, so loading a model trained in Python into C++ is made accessible this way.
- TensorFlow Serving: the saved model can also be loaded by the TensorFlow Serving binary.
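A sketch of simple_save and the Python loader described above; the export directory and tensor names are illustrative:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
y = tf.layers.dense(x, 1, name="y")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.saved_model.simple_save(sess, "./export",
                               inputs={"x": x}, outputs={"y": y})

# Reload the MetaGraphDef tagged for serving into a fresh session.
with tf.Graph().as_default(), tf.Session() as sess:
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], "./export")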
- Specify the output nodes and the corresponding APIs: serving_input_receiver_fn() accepts inference requests and prepares them for the model by adding placeholders to the graph plus any additional ops needed to convert data from the input format into the feature tensors. It must return a tf.estimator.export.ServingInputReceiver object (see the sketch after this list).
- Export the model to the SavedModel format: estimator.export_savedmodel() writes a SavedModel directory containing a MetaGraphDef.
- Serve the model from a local server and request predictions: requests can be made to perform prediction or inference (not so useful for robots).
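A sketch of a serving input receiver; feature_spec and the tensor names are assumptions:

import tensorflow as tf

feature_spec = {"x": tf.FixedLenFeature([4], tf.float32)}  # assumed parsing spec

def serving_input_receiver_fn():
    serialized = tf.placeholder(dtype=tf.string, shape=[None],
                                name="input_example_tensor")
    receiver_tensors = {"examples": serialized}            # raw request tensors
    features = tf.parse_example(serialized, feature_spec)  # convert to feature Tensors
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

# With a trained `estimator` (section 4), export for serving:
# estimator.export_savedmodel("./export", serving_input_receiver_fn)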
The saved_model_cli tool supports:
- show: shows the computations available in a MetaGraphDef in a SavedModel.
- run: runs a computation on a MetaGraphDef.
- --overwrite or --outdir options: these can be used to save the outputs.
When one saves a model in SavedModel format, TensorFlow creates a SavedModel directory consisting of specific subdirectories and files:
- assets: a subfolder containing auxiliary files, such as vocabularies. Assets are copied to the SavedModel location and can be read when loading a specific MetaGraphDef.
- assets.extra: a subfolder where higher-level libraries and users can add their own assets that co-exist with the model but are not loaded by the graph.
- variables: a subfolder containing the output from tf.train.Saver.
8. Debugging.
tf_debug is the API for debugging TensorFlow. One writes sess = tf_debug.LocalCLIDebugWrapperSession(sess) to inspect the graph's internal state and register special filters for tensor values. See the documentation for the frequently used command-line tools.
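A minimal sketch of wrapping a session with the debugger:

import tensorflow as tf
from tensorflow.python import debug as tf_debug

sess = tf.Session()
sess = tf_debug.LocalCLIDebugWrapperSession(sess)  # wraps with the CLI debugger
# Register a filter to flag tensors containing NaNs or Infs.
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)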
Use https://github.com/MrGemy95/Tensorflow-Project-Template for big projects.