TensorFlow output_data[output_ids == i] = input_data[input_ids == i] code analysis

import tensorflow as tf
import os

os.environ['CUDA_VISIBLE_DEVICES'] = '1'

with tf.Graph().as_default(), tf.Session() as sess:
    input_data = tf.constant([0.1, 0.2, 0.3])
    input_ids = tf.constant([3, 1, 6])
    output_data = tf.constant([0., 0., 0., 0., 0.])
    output_ids = tf.constant([6, 3, 1, 3, 0])

    # From TF v1.13
    # s = tf.argsort(input_ids)
    # Before TF v1.13
    s = tf.contrib.framework.argsort(input_ids)   # permutation that sorts input_ids
    input_ids_s = tf.gather(input_ids, s)          # input_ids in ascending order
    n = tf.size(input_ids)

    # Position of each output id in the sorted input ids, clamped to a valid index.
    output_idx_s = tf.minimum(tf.searchsorted(input_ids_s, output_ids), n - 1)

    # Intermediate tensors kept only for the debug prints below.
    searchsorted = tf.searchsorted(input_ids_s, output_ids), n - 1
    gather1 = tf.gather(input_ids_s, output_idx_s)
    equal = tf.equal(output_ids, tf.gather(input_ids_s, output_idx_s))
    gather2 = tf.gather(s, output_idx_s)
    gather3 = tf.gather(input_data, tf.gather(s, output_idx_s))

    # Take the matched input value where the output id really occurs in input_ids,
    # otherwise keep the original output_data entry.
    output_data = tf.where(tf.equal(output_ids, tf.gather(input_ids_s, output_idx_s)),
                           tf.gather(input_data, tf.gather(s, output_idx_s)),
                           output_data)

    print("output_data", sess.run(output_data))
    print("s", sess.run(s))
    print("input_ids_s", sess.run(input_ids_s))
    print("n", sess.run(n))
    print("searchsorted", sess.run(searchsorted))
    print("input_idx_s", sess.run(output_idx_s))
    print("gather1", sess.run(gather1))
    print("equal", sess.run(equal))
    print("gather2", sess.run(gather2))
    print("gather3", sess.run(gather3))
    print("out_idx_s", sess.run(output_idx_s))

The function of this code is the simple NumPy statement output_data[output_ids == i] = input_data[input_ids == i], from:

https://stackoverflow.com/questions/56073800/output-dataoutput-ids-i-input-datainput-ids-i-in-tensorflow
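For reference, the same operation in plain NumPy looks like this (a minimal sketch on the same data, looping over the unique ids; not part of the linked answer):

import numpy as np

input_data = np.array([0.1, 0.2, 0.3])
input_ids = np.array([3, 1, 6])
output_data = np.zeros(5)
output_ids = np.array([6, 3, 1, 3, 0])

for i in np.unique(input_ids):
    # every output slot whose id is i receives the input value stored under id i
    output_data[output_ids == i] = input_data[input_ids == i]

print(output_data)  # [0.3 0.1 0.2 0.1 0. ]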

It's hard to publish papers using only high-level APIs, because you are building on mature high-level code that already exists, and even the ways of combining those pieces have been explored for a long time.

So low-level (core) TensorFlow is still worth learning. Have you ever tried writing a new stochastic gradient descent variant with Keras alone? Some TensorFlow ops are awkward to use, and others are implemented in C++.

In industry, on the other hand, the more mature the API the better: more stability means faster results, and faster results mean more money.

Output (the input definitions are repeated at the front so the results are easier to read):

input_data = tf.constant([0.1, 0.2, 0.3])
input_ids = tf.constant([3, 1, 6])
output_data = tf.constant([0., 0., 0., 0., 0.])
output_ids = tf.constant([6, 3, 1, 3, 0])

output_data [0.3 0.1 0.2 0.1 0. ]
s [1 0 2]
input_ids_s [1 3 6]
n 3
searchsorted (array([2, 1, 0, 1, 0], dtype=int32), 2)
input_idx_s [2 1 0 1 0]
gather1 [6 3 1 3 1]
equal [ True  True  True  True False]
gather2 [2 0 1 0 1]
gather3 [0.3 0.1 0.2 0.1 0.2]
out_idx_s [2 1 0 1 0]

The general idea is as follows: first find the index position of each output_ids entry within input_ids, and then use that index to gather the corresponding values from input_data.

If input_data were already arranged by sorted input_ids in advance, and every output id were guaranteed to occur there, it would be simple: just two lines of code (see the sketch below).
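A minimal sketch of that simpler case. It assumes input_ids is already sorted ascending with input_data ordered to match, and uses a hypothetical output_ids ([6, 3, 1, 3, 1]) in which every id occurs in input_ids; the full code above makes neither assumption:

import tensorflow as tf

with tf.Graph().as_default(), tf.Session() as sess:
    input_ids = tf.constant([1, 3, 6])             # already sorted ascending
    input_data = tf.constant([0.1, 0.2, 0.3])      # ordered to match the sorted ids
    output_ids = tf.constant([6, 3, 1, 3, 1])      # hypothetical: every id occurs in input_ids
    output_idx = tf.searchsorted(input_ids, output_ids)  # where each output id sits in input_ids
    output_data = tf.gather(input_data, output_idx)      # pick up the matching input values
    print(sess.run(output_data))                    # [0.3 0.2 0.1 0.2 0.1]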

There is another point to note: the two id tensors may not match exactly, so tf.equal is needed to check whether each output id actually exists in input_ids.

s is the argsort index of input_ids.

input_ids_s is input_ids sorted into ascending order.

n is the size of input_ids.
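Computed on their own, these three intermediates look like this (a small sketch for TF 1.13+, where tf.argsort is available directly):

import tensorflow as tf

with tf.Graph().as_default(), tf.Session() as sess:
    input_ids = tf.constant([3, 1, 6])
    s = tf.argsort(input_ids)              # [1 0 2]: permutation that sorts input_ids
    input_ids_s = tf.gather(input_ids, s)  # [1 3 6]: input_ids in ascending order
    n = tf.size(input_ids)                 # 3
    print(sess.run([s, input_ids_s, n]))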

tf.searchsorted looks up, in a sorted tensor, the positions at which the values of another tensor are found (or would be inserted). Here it returns, for each output id, its position in the sorted input ids.
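Run in isolation on the data above, it behaves like this (a sketch; the values match the printed output):

import tensorflow as tf

with tf.Graph().as_default(), tf.Session() as sess:
    input_ids_s = tf.constant([1, 3, 6])
    output_ids = tf.constant([6, 3, 1, 3, 0])
    # For each output id, the index at which it is found (or would be inserted) in input_ids_s.
    print(sess.run(tf.searchsorted(input_ids_s, output_ids)))  # [2 1 0 1 0]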

tf.minimum clamps the searchsorted result to n - 1: tf.searchsorted returns n for a value larger than every entry of the sorted tensor, which would be an out-of-range index for the later gathers. The tf.equal / tf.where pair then makes sure that an output id which does not actually occur in input_ids (such as the trailing 0 in this example) keeps its original output_data value.
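To see why the clamp and the equality check are both needed, here is a sketch with a hypothetical id 9 that is larger than every input id:

import tensorflow as tf

with tf.Graph().as_default(), tf.Session() as sess:
    input_ids_s = tf.constant([1, 3, 6])
    n = tf.size(input_ids_s)
    query = tf.constant([9])                    # hypothetical id, absent from input_ids and larger than all of them
    raw = tf.searchsorted(input_ids_s, query)   # [3]: one past the last valid index
    clamped = tf.minimum(raw, n - 1)            # [2]: now safe to feed into tf.gather
    hit = tf.equal(query, tf.gather(input_ids_s, clamped))  # [False]: tf.where keeps the default value
    print(sess.run([raw, clamped, hit]))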

