The LLVM Programmer's Manual gives a brief idea of how llvm::cast works:
cast<>: The cast<> operator is a "checked cast" operation. It converts a pointer or reference from a base class to a derived class, causing an assertion failure if it is not really an instance of the right type. This should be used in cases where you have some information that makes you believe that something is of the right type. An example of the isa<> and cast<> template is:

static bool isLoopInvariant(const Value *V, const Loop *L) {
  if (isa<Constant>(V) || isa<Argument>(V) || isa<GlobalValue>(V))
    return true;

  // Otherwise, it must be an instruction...
  return !L->contains(cast<Instruction>(V)->getParent());
}
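My mental model of that convention is strictly pointer/reference based, roughly like this (a sketch using the LLVM classes from the quoted example; V is assumed to be an llvm::Value*):

// Checked cast on a pointer: asserts if V is not actually an Instruction.
llvm::Instruction *I = llvm::cast<llvm::Instruction>(V);
// The reference form works the same way, on *V.
llvm::Instruction &IRef = llvm::cast<llvm::Instruction>(*V);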
The manual, however, doesn't mention any uses beyond pointers and references. Moreover, there is a separate mlir::cast which, intuitively, should follow the same convention, but for which I could not find any documentation. While developing an MLIR pass, I came across the following challenge:
// op is mlir::tosa::TransposeOp
auto val = op.getOperand(0); // returns mlir::Value
if (val.getType().getShape().size() > 4) // compilation failure
...
The code fails to compile because mlir::Value::getType returns mlir::Type, a base class that exposes no information about the tensor's shape. However, by the definition of tosa.transpose I know the operand must be a tensor. At the same time, mlir::tosa::TransposeOp::getOperand(int) returns an mlir::Value by value (not a reference or a pointer), so it doesn't seem like I can simply "cast" it to the derived type. However, if I do the following:
auto val = mlir::cast<mlir::TypedValue<mlir::RankedTensorType>>(op.getOperand(0));
if (val.getType().getShape().size() > 4)
...
Everything seems to work (it even returns the correct shape).
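For context, here is roughly the setting in which the cast sits, boiled down to a self-contained helper (the function name is a placeholder of mine, not code from my real pass; it assumes, as above, that operand 0 of tosa.transpose is the ranked tensor input):

#include "mlir/Dialect/Tosa/IR/TosaOps.h"
#include "mlir/IR/BuiltinTypes.h"

// Hypothetical helper: checks the rank of the tosa.transpose input
// using the cast in question.
static bool hasSmallRank(mlir::tosa::TransposeOp op) {
  // getOperand(0) hands back an mlir::Value by value, yet the cast below
  // compiles, and val.getType() then returns a RankedTensorType.
  auto val =
      mlir::cast<mlir::TypedValue<mlir::RankedTensorType>>(op.getOperand(0));
  return val.getType().getShape().size() <= 4;
}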
I struggle to understand how this cast can work correctly. Does mlir::Value somehow carry enough information to construct an mlir::TypedValue, or am I observing some kind of UB that happens to give the expected output by chance?