[learn compiler from scratch] XVII. Summary of main points of MLIR ODS Part II

preface

This article supplements and completes the key points of ODS on the basis of [learn compiler from scratch] XVI. Summary of main points of MLIR ODS part I. The definitions of constraints and attributes are very important elements in MLIR. As for type definitions, a general understanding is enough for now; we can study them carefully when we need to customize types. Finally, MLIR's TableGen syntax is rather obscure, and beginners can use mlir-tblgen to assist in debugging.

In these two articles, I followed the ODS specification of MLIR and summarized 14 key points. For each key point, I cross-referenced the Op definitions in OneFlow's MLIR dialect and gave some sample code and its location. I hope this helps readers get started with MLIR.

11. Constraints (this is important)

Constraint is a core concept in table-driven Operation definition: both Operation verification and graph pattern matching are based on constraints. Therefore, both Operation definitions and rewrite rules are directly related to how constraints are written. MLIR defines the constraint base classes in OpBase.td ( https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/IR/OpBase.td ). The constraints of an Operation can cover different ranges, which may be:

  • A single attribute only (for example, a 32-bit integer greater than 5)
  • Multiple operands and results (for example, the shape of the first result must be the same as that of the first operand, which can be understood as a Tensor)
  • Inherent to the operation itself (for example, having no side effects, as in the transpose-op elimination case)

We call them single-entity constraints, multi-entity constraints, and traits, respectively. Just understand the concepts here; what matters most is knowing how to write new constraints.

  • Single-entity constraints. A single-entity constraint scopes over a single operand, attribute, or result, and is specified at the declaration position of the entity, i.e. in Operation arguments and Operation results (summarized in [learn compiler from scratch] XVI. Summary of main points of MLIR ODS part I).

  • Multi-entity constraints. Multi-entity constraints are modeled as the PredOpTrait class (a subclass of OpTrait) in https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/IR/OpBase.td . See OpBase.td for a complete list.

  • Traits. Traits are intrinsic properties of an Operation, such as whether it has side effects, whether it is commutative, whether it is a terminator, etc. These constraints should be specified as Op class template parameters, as in Section 3 (Operation traits and constraints) of [learn compiler from scratch] XVI. Summary of main points of MLIR ODS part I. Traits are modeled as the NativeOpTrait class (a subclass of OpTrait) in https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/IR/OpBase.td , and they are translated into the corresponding C++ mlir::OpTrait classes.

  • How do I specify a new constraint? To write a new constraint, we must provide it with a predicate and a descriptive name. Predicates, modeled with the Pred class, are the core of a constraint. A constraint's predicate is usually built in a nested manner. There are two kinds of predicates: 1. CPred: the primitive leaf predicate. 2. Compound predicates: predicates composed from child predicates using predicate combiners (conjunction: And, disjunction: Or, negation: Neg, substitution: SubstLeaves, concatenation: Concat). CPred is the basis for more complex predicates. It is the "atomic" predicate from TableGen's perspective and the "interface" between TableGen and C++. Its content is C++ code that is treated as an opaque string, with special placeholders substituted. We can put any C++ code that returns a boolean value in a CPred, including evaluating expressions, calling functions, calling class methods, etc.

To facilitate interaction with the C++ environment, special placeholders are provided to refer to entities in the context in which the predicate is used. They act as "hooks" into the enclosing environment. These include $_builder, $_op and $_self:

  • $_builder is replaced with an instance of mlir::Builder so that we can access common build methods.
  • $_op is replaced with the current Operation so that we can access information about it.
  • $_self is replaced with the entity the predicate is attached to. For example, BoolAttr is an attribute constraint containing CPred<"$_self.isa<BoolAttr>()">, so for BoolAttr:$attr, $_self will be replaced by $attr. Type constraints are a little special: since we want each type constraint to read naturally and to be attached directly to operands/results, $_self is replaced by the type of the operand/result. For example, for F32 in F32:$operand, its $_self expands to operand(...).getType().

For example, suppose we want to check that an attribute attr is an IntegerAttr. In C++ we would call attr.isa<IntegerAttr>(). The same check can be wrapped in CPred as "$_self.isa<IntegerAttr>()", where $_self, as a special placeholder, is replaced by the current attribute attr during expansion, achieving the same effect in TableGen.

For more complex predicates, we can either wrap them in a single CPred or combine them with predicate combiners. For example, a constraint saying that the attribute attr is a 32-bit or 64-bit integer can be written as:

And<[
  CPred<"$_self.isa<IntegerAttr>()">,
  Or<[
    CPred<"$_self.cast<IntegerAttr>().getType().isInteger(32)">,
    CPred<"$_self.cast<IntegerAttr>().getType().isInteger(64)">
  ]>
]>

(Note that the above is just an illustrative example of how to use CPred and predicate combiners to write complex predicates. In practice, OpBase.td already defines I32Attr and I64Attr for integer attributes, so we can reuse their predicates and write Or<[I32Attr.predicate, I64Attr.predicate]>.)

Here, we use an example from OneFlow to explain. We define an IsGPU constraint:

def IsGPU: Constraint<CPred<"$0.getValue().equals(\"gpu\")">, "is GPU device">;

OneFlow implements a customized optimization in its Transformer part: consecutive Scale and Tril kernels are fused into one large kernel, which saves some memory read/write time. However, the fused kernel only takes effect on GPU, so we need to check whether the devices of the Scale and Tril operations matched in the current computation graph are GPU, which is what this constraint is for. The FusedScaleTrilPattern is implemented as follows; note the IsGPU constraint used at the end.

def FusedScaleTrilPattern : Pat<
  (
    OneFlow_TrilOp
    (
      OneFlow_ScalarMulOp
        $x,
        $scale_op_name,
        $scale_trainable,
        $scale_device_tag,
        $scale_device_name,
        $scale_scope_symbol_id,
        $scale_hierarchy,
        $has_int_operand,
        $has_float_operand,
        $int_operand,
        $float_operand
    ),
    $tril_op_name,
    $tril_trainable,
    $tril_device_tag,
    $tril_device_name,
    $tril_scope_symbol_id,
    $tril_hierarchy,
    $diagonal,
    $floating_fill_value,
    $integer_fill_value,
    $is_floating_fill_value
  ),
  (OneFlow_FusedScaleTrilOp $x,
    $tril_op_name,
    $tril_trainable,
    $tril_device_tag,
    $tril_device_name,
    $tril_scope_symbol_id,
    $tril_hierarchy,
    $diagonal,
    $floating_fill_value,
    $integer_fill_value,
    $is_floating_fill_value,
    $float_operand,
    $int_operand,
    $has_float_operand
  ),
  [
    (IsGPU $tril_device_tag),
    (IsGPU $scale_device_tag)
  ]
>;

The function of this Pass is to detect consecutive scale + tril operations and fuse the two into a single FusedScaleTrilOp.

If writing the predicate with CPred and predicate combiners gets too complex, we can also write it as an ordinary C++ function and use CPred as a way to "call" that function. For example, to verify whether the attribute attr has some property, we can write a C++ function such as:

bool HasSomeProperty(Attribute attr) { ... }

Then define Op as follows:

def HasSomeProperty : AttrConstraint<CPred<"HasSomeProperty($_self)">,
                                     "has some property">;

def MyOp : Op<...> {
  let arguments = (ins
    ...
    HasSomeProperty:$attr
  );
}

There is no clear standard on whether to define a predicate by wrapping the entire expression in a single CPred, by combining multiple CPreds with predicate combiners, or by using a single CPred that "calls" a function. Defining with CPred and predicate combiners is preferable because it exposes more information to the operation definition specification (rather than hiding all the logic behind a C++ function), so it can potentially drive more automatic generation. However, it requires a good library of common predicates as building blocks to avoid repetition, which is still being worked on.

12. Attribute definition (very important + 1)

An attribute is a compile-time-known constant of an Operation. ODS provides attribute wrappers over the C++ attribute classes. Some common C++ attribute classes are defined in MLIR's core IR library ( https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/IR/Attributes.h ). ODS allows these attributes to be used in TableGen to define Operations, possibly with finer-grained constraints. For example, StrAttr maps directly to StringAttr, while F32Attr/F64Attr additionally require the FloatAttr to have a certain bitwidth. An ODS attribute is defined with a storage type (corresponding to the mlir::Attribute class that stores the attribute), a return type (corresponding to the C++ return type of the generated getter helper), and a method to convert between the internal storage type and the helper method.

Attribute decorators. There are some important attribute adapters/decorators/modifiers that can be applied to ODS attributes to specify common additional properties, such as optionality, default values, etc.:

  • DefaultValuedAttr: specifies a default value for an attribute.
  • OptionalAttr: marks an attribute as optional.
  • Confined: Confined is provided as a general mechanism for further modeling the constraints an attribute carries beyond its value type. Primitive constraints can be combined into complex ones via Confined. For example, a 32-bit integer attribute whose minimum value is 10 can be expressed as Confined<I32Attr, [IntMinValue<10>]>. There are other combinators too, such as IntMinValue<N>: specifying an integer attribute greater than or equal to N, and so on.

Enum attributes. Some attributes can only take values from a predefined enum, such as the comparison kind of a comparison op. To define such attributes, ODS provides several mechanisms: StrEnumAttr, IntEnumAttr, and BitEnumAttr.

  • StrEnumAttr: each enum case is a string; the attribute is stored as a StringAttr in the op.
  • IntEnumAttr: each enum case is an integer; the attribute is stored as an IntegerAttr in the op.
  • BitEnumAttr: each enum case is a bit; the attribute is stored as an IntegerAttr in the op.

All these *EnumAttr attributes require all allowed cases to be fully specified via the corresponding *EnumAttrCase. With this, ODS can generate additional verification that accepts only the allowed cases. To facilitate interaction between *EnumAttrs and their C++ consumers, the EnumsGen TableGen backend ( https://github.com/llvm/llvm-project/blob/main/mlir/tools/mlir-tblgen/EnumsGen.cpp ) can generate some common utilities: a C++ enum class, an llvm::DenseMapInfo for the enum class, and conversion functions to/from strings. This is controlled by the -gen-enum-decls and -gen-enum-defs command-line options of mlir-tblgen.

For example, given the following EnumAttr:

def Case15: I32EnumAttrCase<"Case15", 15>;
def Case20: I32EnumAttrCase<"Case20", 20>;

def MyIntEnum: I32EnumAttr<"MyIntEnum", "An example int enum",
                           [Case15, Case20]> {
  let cppNamespace = "Outer::Inner";
  let stringToSymbolFnName = "ConvertToEnum";
  let symbolToStringFnName = "ConvertToString";
}

Running mlir-tblgen -gen-enum-decls will generate the following code:

namespace Outer {
namespace Inner {
// An example int enum
enum class MyIntEnum : uint32_t {
  Case15 = 15,
  Case20 = 20,
};

llvm::Optional<MyIntEnum> symbolizeMyIntEnum(uint32_t);
llvm::StringRef ConvertToString(MyIntEnum);
llvm::Optional<MyIntEnum> ConvertToEnum(llvm::StringRef);
inline constexpr unsigned getMaxEnumValForMyIntEnum() {
  return 20;
}

} // namespace Inner
} // namespace Outer

namespace llvm {
template<> struct DenseMapInfo<Outer::Inner::MyIntEnum> {
  using StorageInfo = llvm::DenseMapInfo<uint32_t>;

  static inline Outer::Inner::MyIntEnum getEmptyKey() {
    return static_cast<Outer::Inner::MyIntEnum>(StorageInfo::getEmptyKey());
  }

  static inline Outer::Inner::MyIntEnum getTombstoneKey() {
    return static_cast<Outer::Inner::MyIntEnum>(StorageInfo::getTombstoneKey());
  }

  static unsigned getHashValue(const Outer::Inner::MyIntEnum &val) {
    return StorageInfo::getHashValue(static_cast<uint32_t>(val));
  }

  static bool isEqual(const Outer::Inner::MyIntEnum &lhs, const Outer::Inner::MyIntEnum &rhs) {
    return lhs == rhs;
  }
};
}

Running mlir-tblgen -gen-enum-defs will generate the following code:

namespace Outer {
namespace Inner {
llvm::StringRef ConvertToString(MyIntEnum val) {
  switch (val) {
    case MyIntEnum::Case15: return "Case15";
    case MyIntEnum::Case20: return "Case20";
  }
  return "";
}

llvm::Optional<MyIntEnum> ConvertToEnum(llvm::StringRef str) {
  return llvm::StringSwitch<llvm::Optional<MyIntEnum>>(str)
      .Case("Case15", MyIntEnum::Case15)
      .Case("Case20", MyIntEnum::Case20)
      .Default(llvm::None);
}
llvm::Optional<MyIntEnum> symbolizeMyIntEnum(uint32_t value) {
  switch (value) {
  case 15: return MyIntEnum::Case15;
  case 20: return MyIntEnum::Case20;
  default: return llvm::None;
  }
}

} // namespace Inner
} // namespace Outer

The following BitEnumAttr definitions are similar:

def None: BitEnumAttrCase<"None", 0x0000>;
def Bit1: BitEnumAttrCase<"Bit1", 0x0001>;
def Bit2: BitEnumAttrCase<"Bit2", 0x0002>;
def Bit3: BitEnumAttrCase<"Bit3", 0x0004>;

def MyBitEnum: BitEnumAttr<"MyBitEnum", "An example bit enum",
                           [None, Bit1, Bit2, Bit3]>;

We get:

// An example bit enum
enum class MyBitEnum : uint32_t {
  None = 0,
  Bit1 = 1,
  Bit2 = 2,
  Bit3 = 4,
};

llvm::Optional<MyBitEnum> symbolizeMyBitEnum(uint32_t);
std::string stringifyMyBitEnum(MyBitEnum);
llvm::Optional<MyBitEnum> symbolizeMyBitEnum(llvm::StringRef);
inline MyBitEnum operator|(MyBitEnum lhs, MyBitEnum rhs) {
  return static_cast<MyBitEnum>(static_cast<uint32_t>(lhs) | static_cast<uint32_t>(rhs));
}
inline MyBitEnum operator&(MyBitEnum lhs, MyBitEnum rhs) {
  return static_cast<MyBitEnum>(static_cast<uint32_t>(lhs) & static_cast<uint32_t>(rhs));
}
inline bool bitEnumContains(MyBitEnum bits, MyBitEnum bit) {
  return (static_cast<uint32_t>(bits) & static_cast<uint32_t>(bit)) != 0;
}

namespace llvm {
template<> struct DenseMapInfo<::MyBitEnum> {
  using StorageInfo = llvm::DenseMapInfo<uint32_t>;

  static inline ::MyBitEnum getEmptyKey() {
    return static_cast<::MyBitEnum>(StorageInfo::getEmptyKey());
  }

  static inline ::MyBitEnum getTombstoneKey() {
    return static_cast<::MyBitEnum>(StorageInfo::getTombstoneKey());
  }

  static unsigned getHashValue(const ::MyBitEnum &val) {
    return StorageInfo::getHashValue(static_cast<uint32_t>(val));
  }

  static bool isEqual(const ::MyBitEnum &lhs, const ::MyBitEnum &rhs) {
    return lhs == rhs;
  }
};
std::string stringifyMyBitEnum(MyBitEnum symbol) {
  auto val = static_cast<uint32_t>(symbol);
  // Special case for all bits unset.
  if (val == 0) return "None";

  llvm::SmallVector<llvm::StringRef, 2> strs;
  if (1u & val) { strs.push_back("Bit1"); val &= ~1u; }
  if (2u & val) { strs.push_back("Bit2"); val &= ~2u; }
  if (4u & val) { strs.push_back("Bit3"); val &= ~4u; }

  if (val) return "";
  return llvm::join(strs, "|");
}

llvm::Optional<MyBitEnum> symbolizeMyBitEnum(llvm::StringRef str) {
  // Special case for all bits unset.
  if (str == "None") return MyBitEnum::None;

  llvm::SmallVector<llvm::StringRef, 2> symbols;
  str.split(symbols, "|");

  uint32_t val = 0;
  for (auto symbol : symbols) {
    auto bit = llvm::StringSwitch<llvm::Optional<uint32_t>>(symbol)
      .Case("Bit1", 1)
      .Case("Bit2", 2)
      .Case("Bit3", 4)
      .Default(llvm::None);
    if (bit) { val |= *bit; } else { return llvm::None; }
  }
  return static_cast<MyBitEnum>(val);
}

llvm::Optional<MyBitEnum> symbolizeMyBitEnum(uint32_t value) {
  // Special case for all bits unset.
  if (value == 0) return MyBitEnum::None;

  if (value & ~(1u | 2u | 4u)) return llvm::None;
  return static_cast<MyBitEnum>(value);
}

In OneFlow's MLIR dialect, an enum attribute is also defined to handle OneFlow's various data types. The code is as follows:

#ifndef ONEFLOW_ENUMS
#define ONEFLOW_ENUMS

def OneFlow_InvalidDataType : I32EnumAttrCase<"DT_InvalidDataType", 0>;
def OneFlow_Char : I32EnumAttrCase<"DT_Char", 1>;
def OneFlow_Float : I32EnumAttrCase<"DT_Float", 2>;
def OneFlow_Double : I32EnumAttrCase<"DT_Double", 3>;
def OneFlow_Int8 : I32EnumAttrCase<"DT_Int8", 4>;
def OneFlow_Int32 : I32EnumAttrCase<"DT_Int32", 5>;
def OneFlow_Int64 : I32EnumAttrCase<"DT_Int64", 6>;
def OneFlow_UInt8 : I32EnumAttrCase<"DT_UInt8", 7>;
def OneFlow_OFRecord : I32EnumAttrCase<"DT_OFRecord", 8>;
def OneFlow_Float16 : I32EnumAttrCase<"DT_Float16", 9>;
def OneFlow_TensorBuffer: I32EnumAttrCase<"DT_TensorBuffer", 10>;

def OneFlow_DataType: I32EnumAttr<"DataType", "OneFlow Data Type enum",
  [
    OneFlow_InvalidDataType,
    OneFlow_Char,
    OneFlow_Float,
    OneFlow_Double,
    OneFlow_Int8,
    OneFlow_Int32,
    OneFlow_Int64,
    OneFlow_UInt8,
    OneFlow_OFRecord,
    OneFlow_Float16,
    OneFlow_TensorBuffer,
  ]
> {
  let cppNamespace = "::mlir::oneflow";
  let stringToSymbolFnName = "ConvertToEnum";
  let symbolToStringFnName = "ConvertToString";
}

#endif // ONEFLOW_ENUMS

We can inspect the enum attribute declarations it generates:

/*===- TableGen'erated file -------------------------------------*- C++ -*-===*\
|*                                                                            *|
|* Enum Utility Declarations                                                  *|
|*                                                                            *|
|* Automatically generated file, do not edit!                                 *|
|*                                                                            *|
\*===----------------------------------------------------------------------===*/

namespace mlir {
namespace oneflow {
// OneFlow Data Type enum
enum class DataType : uint32_t {
  DT_InvalidDataType = 0,
  DT_Char = 1,
  DT_Float = 2,
  DT_Double = 3,
  DT_Int8 = 4,
  DT_Int32 = 5,
  DT_Int64 = 6,
  DT_UInt8 = 7,
  DT_OFRecord = 8,
  DT_Float16 = 9,
  DT_TensorBuffer = 10,
};

::llvm::Optional<DataType> symbolizeDataType(uint32_t);
::llvm::StringRef ConvertToString(DataType);
::llvm::Optional<DataType> ConvertToEnum(::llvm::StringRef);
inline constexpr unsigned getMaxEnumValForDataType() {
  return 10;
}


inline ::llvm::StringRef stringifyEnum(DataType enumValue) {
  return ConvertToString(enumValue);
}

template <typename EnumType>
::llvm::Optional<EnumType> symbolizeEnum(::llvm::StringRef);

template <>
inline ::llvm::Optional<DataType> symbolizeEnum<DataType>(::llvm::StringRef str) {
  return ConvertToEnum(str);
}

class DataTypeAttr : public ::mlir::IntegerAttr {
public:
  using ValueType = DataType;
  using ::mlir::IntegerAttr::IntegerAttr;
  static bool classof(::mlir::Attribute attr);
  static DataTypeAttr get(::mlir::MLIRContext *context, DataType val);
  DataType getValue() const;
};
} // namespace oneflow
} // namespace mlir

namespace llvm {
template<> struct DenseMapInfo<::mlir::oneflow::DataType> {
  using StorageInfo = ::llvm::DenseMapInfo<uint32_t>;

  static inline ::mlir::oneflow::DataType getEmptyKey() {
    return static_cast<::mlir::oneflow::DataType>(StorageInfo::getEmptyKey());
  }

  static inline ::mlir::oneflow::DataType getTombstoneKey() {
    return static_cast<::mlir::oneflow::DataType>(StorageInfo::getTombstoneKey());
  }

  static unsigned getHashValue(const ::mlir::oneflow::DataType &val) {
    return StorageInfo::getHashValue(static_cast<uint32_t>(val));
  }

  static bool isEqual(const ::mlir::oneflow::DataType &lhs, const ::mlir::oneflow::DataType &rhs) {
    return lhs == rhs;
  }
};
}

The implementation part is not pasted here, as the generated code is too long.

13. Type definition (I just have a brief understanding)

MLIR defines the TypeDef class hierarchy to support generating data types from their specification. A type is defined by specializing TypeDef with the concrete contents of all the fields it needs. For example, an integer type can be defined as:

// All of the types will extend this class.
class Test_Type<string name> : TypeDef<Test_Dialect, name> { }

// An alternate int type.
def IntegerType : Test_Type<"TestInteger"> {
  let mnemonic = "int";

  let summary = "An integer type with special semantics";

  let description = [{
    An alternate integer type. This type differentiates itself from the
    standard integer type by not having a SignednessSemantics parameter, just
    a width.
  }];

  let parameters = (ins "unsigned":$width);

  // We define the printer inline.
  let printer = [{
    $_printer << "int<" << getImpl()->width << ">";
  }];

  // The parser is defined here also.
  let parser = [{
    if ($_parser.parseLess())
      return Type();
    int width;
    if ($_parser.parseInteger(width))
      return Type();
    if ($_parser.parseGreater())
      return Type();
    return get($_ctxt, width);
  }];
}

  • Type name: the name of the generated C++ class is the TypeDef name suffixed with Type by default (for example, TestIntegerType in the example above). This can be overridden via the cppClassName field. mnemonic specifies the asm name used for parsing. It is optional; leaving it out means no parser or printer methods are attached to this class.

  • Type documentation: there are summary and description fields, used the same way as in Operation: the summary should be a single line, and the description a longer explanation.

  • Type parameters: the parameters field is a list of the type's parameters. If no parameters are specified (the default), the type is considered a singleton type. Parameters use the "c++Type":$paramName format. To use a c++Type that requires allocation in the storage constructor, there are two options: 1. Set hasCustomStorageConstructor to generate the TypeStorage class with the constructor declared but not defined, so we can write it ourselves. 2. Use the TypeParameter TableGen class instead of the raw "c++Type" string. (I haven't used the second option myself yet.)

  • The TypeParameter TableGen class: this is used to specify further properties of each type parameter. It includes documentation (summary and syntax), the C++ type to use, a custom allocator to use in the storage constructor, and a custom comparator to decide whether two instances of the parameter type are equal. For example, blindly passing an ArrayRef parameter is dangerous:

// DO NOT DO THIS!
let parameters = (ins "ArrayRef<int>":$dims);

The default storage constructor blindly copies fields by value. It knows nothing about the types. In this case, the ArrayRef needs to be allocated with dims = allocator.copyInto(dims), which a TypeParameter subclass can express:

class ArrayRefIntParam :
    TypeParameter<"::llvm::ArrayRef<int>", "Array of ints"> {
  let allocator = "$_dst = $_allocator.copyInto($_self);";
}

...

let parameters = (ins ArrayRefIntParam:$dims);

The allocator code block has access to $_allocator (the TypeStorageAllocator in which to allocate objects) and $_dst (the variable in which to place the allocated data). The comparator code block has access to $_lhs and $_rhs, two instances of the parameter type.

There is a lot more to custom types, but I have no need for them at the moment, so I did not read further; this is just a brief introduction. Interested readers can consult the documentation for in-depth study: https://mlir.llvm.org/docs/OpDefinitions/ .

14. Debugging methods

Use mlir-tblgen to see the generated content. TableGen syntax can sometimes be obscure, and reading the generated content is very useful for understanding and debugging problems. To build mlir-tblgen, run cmake --build . --target mlir-tblgen in the build directory; the mlir-tblgen binary will be found in the bin/ subdirectory. All supported generators can be listed with mlir-tblgen --help.

To view the generated code, provide the include paths via -I and invoke a specific generator with mlir-tblgen. For example:

# To see op C++ class declaration
mlir-tblgen --gen-op-decls -I /path/to/mlir/include /path/to/input/td/file
# To see op C++ class definition
mlir-tblgen --gen-op-defs -I /path/to/mlir/include /path/to/input/td/file
# To see op documentation
mlir-tblgen --gen-dialect-doc -I /path/to/mlir/include /path/to/input/td/file

# To see op interface C++ class declaration
mlir-tblgen --gen-op-interface-decls -I /path/to/mlir/include /path/to/input/td/file
# To see op interface C++ class definition
mlir-tblgen --gen-op-interface-defs -I /path/to/mlir/include /path/to/input/td/file
# To see op interface documentation
mlir-tblgen --gen-op-interface-doc -I /path/to/mlir/include /path/to/input/td/file

15. Summary

This article supplemented and completed the key points of ODS on the basis of [learn compiler from scratch] XVI. Summary of main points of MLIR ODS part I. The definitions of constraints and attributes are very important elements in MLIR. As for type definitions, a general understanding is enough; we can study them carefully when we need custom types. Finally, MLIR's TableGen syntax is rather obscure, and beginners can use mlir-tblgen to assist in debugging.

In these two articles, I followed the ODS specification of MLIR and summarized 14 key points. For each key point, I cross-referenced the Op definitions in OneFlow's MLIR dialect and gave some sample code and its location. I hope this helps readers get started with MLIR.

Tags: AI Computer Vision Deep Learning

Posted on Mon, 29 Nov 2021 13:04:19 -0500 by reece_1989