I want to convert an existing model to one that will run on a USB-stick ‘accelerator’ called Coral. Conversion to tflite is a prerequisite for small devices like these.
I haven’t managed this yet, but here are some notes. I’ve figured out some of it, but came unstuck because some operations (‘ops’) are not supported in tflite yet. Maybe this is still useful to someone, and I want to remember what I did.
I’m trying to convert a tensorflow model – for which I only have .meta and .index files – to one with .pb files or variables, which seems to be called a ‘SavedModel’. These formats have some interoperability, and a SavedModel appears to be a prerequisite for making a tflite model.
Here’s what I have to start with:

LJ01-1/model_gs_933k.meta
LJ01-1/model_gs_933k.index
Conversion to SavedModel
First, create a SavedModel (this code is for TensorFlow 1.3, but in 2.0 it’s a simple conversion using a command-line tool).
import tensorflow as tf

model_path = 'LJ01-1/model_gs_933k'
output_node_names = ['Merge_1/MergeSummary']

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Restore the graph structure
    saver = tf.train.import_meta_graph(model_path + '.meta')
    # Load weights
    saver.restore(sess, model_path)
    # Freeze the graph (variables become constants)
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_node_names)
    builder = tf.saved_model.builder.SavedModelBuilder('new_models')
    # Take the first op's output as the model input, and the last op's as its output
    ops = sess.graph.get_operations()
    input_tensor = ops[0].values()[0]
    output_tensor = ops[-1].values()[0]
    tensor_info_input = tf.saved_model.utils.build_tensor_info(input_tensor)
    tensor_info_output = tf.saved_model.utils.build_tensor_info(output_tensor)
    prediction_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs={'input': tensor_info_input},
            outputs={'output': tensor_info_output},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={'predict': prediction_signature})
    builder.save()
To find out the names of the input and output ops, you can list every node in the graph with

output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
That gives you a directory (new_models) with the standard SavedModel layout:

new_models/
├── saved_model.pb
└── variables/
    ├── variables.data-00000-of-00001
    └── variables.index
Conversion to tflite
Once you have that, you can use the command-line tool tflite_convert (examples):
tflite_convert --saved_model_dir=new_models --output_file=model.tflite --enable_select_tf_ops
This does the conversion to tflite. And it will probably fail, e.g. mine did this:
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ABS, ADD, CAST, CONCATENATION, CONV_2D, DIV, EXP, EXPAND_DIMS, FLOOR, GATHER, GREATER_EQUAL, LOGISTIC, MEAN, MUL, NEG, NOT_EQUAL, PAD, PADV2, RSQRT, SELECT, SHAPE, SOFTMAX, SPLIT, SQUARED_DIFFERENCE, SQUEEZE, STRIDED_SLICE, SUB, SUM, TRANSPOSE, ZEROS_LIKE. Here is a list of operators for which you will need custom implementations: BatchMatMul, FIFOQueueV2, ImageSummary, Log1p, MergeSummary, PaddingFIFOQueueV2, QueueDequeueV2, QueueSizeV2, RandomUniform, ScalarSummary.
You can add --allow_custom_ops to that, which will let everything through – but the result still won’t run if it contains ops that tflite doesn’t support: you have to write custom operators for the ones that don’t yet work (I’ve not tried this).
But it’s still useful to use --allow_custom_ops, i.e.
tflite_convert --saved_model_dir=new_models --output_file=model.tflite --enable_select_tf_ops --allow_custom_ops
because once you have a tflite file you can visualise the graph using netron. That’s quite interesting, although I suspect it doesn’t work for the bits which were passed through but aren’t supported.
>>> import netron
>>> netron.start('model.tflite')
Serving 'model.tflite' at http://localhost:8080
Update – I forgot a link, to the guide on using select TensorFlow ops:
“This document outlines how to use TensorFlow Lite with select TensorFlow ops. Note that this feature is experimental and is under active development.”