.. _l-table-operator-com-microsoft:

operator table for domain com.microsoft
=======================================

.. list-table:: operators for domain com.microsoft
   :widths: 10 10
   :header-rows: 1

   * - operator
     - versions
   * - AdamOptimizer
     - :ref:`1 `
   * - AdamWOptimizer
     - :ref:`1 `
   * - AdasumAllReduce
     - :ref:`1 `
   * - All
     - :ref:`1 `
   * - Attention
     - :ref:`1 `
   * - AttnLSTM
     - :ref:`1 `
   * - BatchNormInternal
     - :ref:`1 `
   * - BatchNormalizationGrad
     - :ref:`1 `
   * - BeamSearch
     - :ref:`1 `
   * - BiasDropout
     - :ref:`1 `
   * - BiasFastGeluGrad_dX
     - :ref:`1 `
   * - BiasGelu
     - :ref:`1 `
   * - BiasGeluGrad_dX
     - :ref:`1 `
   * - BiasSoftmax
     - :ref:`1 `
   * - BiasSoftmaxDropout
     - :ref:`1 `
   * - BifurcationDetector
     - :ref:`1 `
   * - BitmaskBiasDropout
     - :ref:`1 `
   * - BitmaskDropout
     - :ref:`1 `
   * - BitmaskDropoutGrad
     - :ref:`1 `
   * - BroadcastGradientArgs
     - :ref:`1 `
   * - CDist
     - :ref:`1 `
   * - ComplexMul
     - :ref:`1 `
   * - ComplexMulConj
     - :ref:`1 `
   * - ConcatTraining
     - :ref:`1 `
   * - ConvGrad
     - :ref:`1 `
   * - ConvTransposeWithDynamicPads
     - :ref:`1 `
   * - CropAndResize
     - :ref:`1 `
   * - DecoderAttention
     - :ref:`1 `
   * - DequantizeBFP
     - :ref:`1 `
   * - DequantizeLinear
     - :ref:`1 `
   * - DequantizeWithOrder
     - :ref:`1 `
   * - DivGrad
     - :ref:`1 `
   * - DropoutGrad
     - :ref:`1 `
   * - DynamicQuantizeLSTM
     - :ref:`1 `
   * - DynamicQuantizeMatMul
     - :ref:`1 `
   * - EmbedLayerNormalization
     - :ref:`1 `
   * - ExpandDims
     - :ref:`1 `
   * - FastGelu
     - :ref:`1 `
   * - FastGeluGrad
     - :ref:`1 `
   * - FusedConv
     - :ref:`1 `
   * - FusedGemm
     - :ref:`1 `
   * - FusedMatMul
     - :ref:`1 `
   * - GatherElementsGrad
     - :ref:`1 `
   * - GatherGrad
     - :ref:`1 `
   * - GatherND
     - :ref:`1 `
   * - GatherNDGrad
     - :ref:`1 `
   * - Gelu
     - :ref:`1 `
   * - GeluGrad
     - :ref:`1 `
   * - GemmFastGelu
     - :ref:`1 `
   * - GistBinarizeDecoder
     - :ref:`1 `
   * - GistBinarizeEncoder
     - :ref:`1 `
   * - GistPack16Decoder
     - :ref:`1 `
   * - GistPack16Encoder
     - :ref:`1 `
   * - GistPack1Decoder
     - :ref:`1 `
   * - GistPack1Encoder
     - :ref:`1 `
   * - GistPack8Decoder
     - :ref:`1 `
   * - GistPack8Encoder
     - :ref:`1 `
   * - GistPackMsfp15Decoder
     - :ref:`1 `
   * - GistPackMsfp15Encoder
     - :ref:`1 `
   * - GreedySearch
     - :ref:`1 `
   * - GridSample
     - :ref:`1 `
   * - Group
     - :ref:`1 `
   * - InPlaceAccumulator
     - :ref:`1 `
   * - InPlaceAccumulatorV2
     - :ref:`1 `
   * - InplaceClipGradNorm
     - :ref:`1 `
   * - Inverse
     - :ref:`1 `
   * - InvertibleLayerNormalizationGrad
     - :ref:`1 `
   * - Irfft
     - :ref:`1 `
   * - IsAllFinite
     - :ref:`1 `
   * - IsFinite
     - :ref:`1 `
   * - LambOptimizer
     - :ref:`1 `
   * - LayerNormalizationGrad
     - :ref:`1 `
   * - LogSoftmaxGrad
     - :ref:`1 `
   * - LogSoftmaxGrad_13
     - :ref:`1 `
   * - LongformerAttention
     - :ref:`1 `
   * - MatMulInteger16
     - :ref:`1 `
   * - MatMulIntegerToFloat
     - :ref:`1 `
   * - MaxpoolWithMask
     - :ref:`1 `
   * - MegatronF
     - :ref:`1 `
   * - MegatronG
     - :ref:`1 `
   * - MixedPrecisionScale
     - :ref:`1 `
   * - MulInteger
     - :ref:`1 `
   * - MurmurHash3
     - :ref:`1 `
   * - NGramRepeatBlock
     - :ref:`1 `
   * - NcclAllGather
     - :ref:`1 `
   * - NcclAllReduce
     - :ref:`1 `
   * - NcclReduceScatter
     - :ref:`1 `
   * - NegativeLogLikelihoodLossInternal
     - :ref:`1 `
   * - NegativeLogLikelihoodLossInternal2
     - :ref:`1 `
   * - NhwcConv
     - :ref:`1 `
   * - NhwcMaxPool
     - :ref:`1 `
   * - Pad
     - :ref:`1 `
   * - PassThrough
     - :ref:`1 `
   * - PythonOp
     - :ref:`1 `
   * - PythonOpGrad
     - :ref:`1 `
   * - QAttention
     - :ref:`1 `
   * - QEmbedLayerNormalization
     - :ref:`1 `
   * - QGemm
     - :ref:`1 `
   * - QLinearAdd
     - :ref:`1 `
   * - QLinearAveragePool
     - :ref:`1 `
   * - QLinearConcat
     - :ref:`1 `
   * - QLinearConv
     - :ref:`1 `
   * - QLinearGlobalAveragePool
     - :ref:`1 `
   * - QLinearLeakyRelu
     - :ref:`1 `
   * - QLinearMul
     - :ref:`1 `
   * - QLinearReduceMean
     - :ref:`1 `
   * - QLinearSigmoid
     - :ref:`1 `
   * - QLinearSoftmax
     - :ref:`1 `
   * - QOrderedAttention
     - :ref:`1 `
   * - QOrderedGelu
     - :ref:`1 `
   * - QOrderedLayerNormalization
     - :ref:`1 `
   * - QOrderedLongformerAttention
     - :ref:`1 `
   * - QOrderedMatMul
     - :ref:`1 `
   * - QuantizeBFP
     - :ref:`1 `
   * - QuantizeLinear
     - :ref:`1 `
   * - QuantizeWithOrder
     - :ref:`1 `
   * - QuickGelu
     - :ref:`1 `
   * - QuickGeluGrad
     - :ref:`1 `
   * - Range
     - :ref:`1 `
   * - RecordEvent
     - :ref:`1 `
   * - Recv
     - :ref:`1 `
   * - ReduceAllL2
     - :ref:`1 `
   * - ReduceSumInteger
     - :ref:`1 `
   * - ReduceSumTraining
     - :ref:`1 `
   * - ReluGrad
     - :ref:`1 `
   * - RemovePadding
     - :ref:`1 `
   * - RestorePadding
     - :ref:`1 `
   * - Rfft
     - :ref:`1 `
   * - SGDOptimizer
     - :ref:`1 `
   * - SampleOp
     - :ref:`1 `
   * - Scale
     - :ref:`1 `
   * - Send
     - :ref:`1 `
   * - SigmoidGrad
     - :ref:`1 `
   * - SimplifiedLayerNormalizationGrad
     - :ref:`1 `
   * - SkipLayerNormalization
     - :ref:`1 `
   * - SliceGrad
     - :ref:`1 `
   * - Snpe
     - :ref:`1 `
   * - SoftmaxCrossEntropy
     - :ref:`1 `
   * - SoftmaxCrossEntropyGrad
     - :ref:`1 `
   * - SoftmaxCrossEntropyLossGrad
     - :ref:`1 `
   * - SoftmaxCrossEntropyLossInternal
     - :ref:`1 `
   * - SoftmaxCrossEntropyLossInternalGrad
     - :ref:`1 `
   * - SoftmaxDropoutGrad
     - :ref:`1 `
   * - SoftmaxGrad
     - :ref:`1 `
   * - SoftmaxGrad_13
     - :ref:`1 `
   * - SparseToDenseMatMul
     - :ref:`1 `
   * - SplitTraining
     - :ref:`1 `
   * - SummaryHistogram
     - :ref:`1 `
   * - SummaryMerge
     - :ref:`1 `
   * - SummaryScalar
     - :ref:`1 `
   * - SummaryText
     - :ref:`1 `
   * - TanhGrad
     - :ref:`1 `
   * - Tokenizer
     - :ref:`1 `
   * - TorchEmbedding
     - :ref:`1 `
   * - TransposeMatMul
     - :ref:`1 `
   * - Trilu
     - :ref:`1 `
   * - Unique
     - :ref:`1 `
   * - View
     - :ref:`1 `
   * - WaitEvent
     - :ref:`1 `
   * - WordConvEmbedding
     - :ref:`1 `
   * - YieldOp
     - :ref:`1 `
   * - ZeroGradient
     - :ref:`1 `