.. _l-onnx-doc-SparseSoftmaxCrossEntropyGrad:

=============================
SparseSoftmaxCrossEntropyGrad
=============================

.. contents::
    :local:

.. _l-onnx-op-sparsesoftmaxcrossentropygrad-9:

SparseSoftmaxCrossEntropyGrad - 9
=================================

**Version**

* **name**: `SparseSoftmaxCrossEntropyGrad (GitHub) `_
* **domain**: **main**
* **since_version**: **9**
* **function**:
* **support_level**:
* **shape inference**:

This version of the operator has been available **since version 9**.

**Summary**

Gradient of the sparse softmax cross entropy loss with respect to the logits.

**Attributes**

* **reduction**:
  Type of reduction to apply to loss: none, sum, mean (default).
  'none': the output is the loss for each sample in the batch.
  'sum': the output will be summed.
  'mean': the sum of the output will be divided by the batch_size.
  Default value is ``'mean'``.

**Inputs**

Between 3 and 4 inputs.

* **dY** (heterogeneous) - **T**:
  gradient of Y
* **log_prob** (heterogeneous) - **T**:
  logsoftmax(logits), (N+1)-D input of shape (batch_size).
* **label** (heterogeneous) - **Tind**:
  label is an N-D input whose shape should match that of logits. It is a
  tensor of nonnegative integers, where each element is the nonnegative
  integer label for the corresponding element of the batch.
* **weight** (optional, heterogeneous) - **T**:
  weight for each sample. The shape is the same as label's.

**Outputs**

* **d_logits** (heterogeneous) - **T**:
  gradient of logits

**Examples**
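
As a rough illustration, the following numpy sketch (hypothetical, not the
operator's actual kernel) derives ``d_logits`` for the 2-D case
``(batch_size, num_classes)`` from the standard identity
``softmax(logits) - one_hot(label)``, combined with the optional per-sample
weight and the reduction semantics listed above. All function and variable
names are illustrative::

    import numpy as np

    def sparse_softmax_cross_entropy_grad(dY, log_prob, label, weight=None,
                                          reduction="mean"):
        # Hypothetical reference sketch, assuming the usual identity
        # d(loss)/d(logits) = softmax(logits) - one_hot(label),
        # scaled by the incoming gradient dY.
        batch_size, num_classes = log_prob.shape
        prob = np.exp(log_prob)                  # recover softmax(logits)
        one_hot = np.zeros_like(prob)
        one_hot[np.arange(batch_size), label] = 1.0

        d_logits = prob - one_hot                # per-sample gradient
        if weight is not None:
            d_logits *= weight[:, None]          # optional per-sample weight

        if reduction == "none":
            d_logits *= dY[:, None]              # dY holds one value per sample
        elif reduction == "sum":
            d_logits *= dY                       # dY is a scalar
        elif reduction == "mean":
            # per the attribute docs, the loss sum is divided by batch_size
            d_logits *= dY / batch_size
        return d_logits

    # Example: batch of 2 samples, 3 classes, mean reduction.
    logits = np.array([[1.0, 2.0, 0.5], [0.1, 0.2, 3.0]])
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    label = np.array([1, 2])
    print(sparse_softmax_cross_entropy_grad(np.array(1.0), log_prob, label))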