Oh. >12 TF FP32; >24 TF FP16; 49 TOPS INT8; 97 TOPS INT4.
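Those four figures are the same silicon counted at different widths. A quick back-of-envelope sketch, assuming the publicly stated XSX config of 52 active CUs at 1.825 GHz with 64 FP32 FMA lanes per CU (the CU count and clock are assumptions pulled from Microsoft's published spec, not from this post):

```python
CUS = 52            # active compute units (assumed from public spec)
CLOCK_HZ = 1.825e9  # sustained GPU clock (assumed from public spec)
LANES = 64          # FP32 FMA lanes per CU

# An FMA counts as 2 ops (multiply + add) per lane per clock.
fp32 = CUS * LANES * 2 * CLOCK_HZ / 1e12
print(f"FP32: {fp32:.2f} TF")  # ~12.15

# Packed math splits each 32-bit lane into narrower elements,
# so peak rate scales with how many elements fit per lane.
for fmt, factor in [("FP16", 2), ("INT8", 4), ("INT4", 8)]:
    print(f"{fmt}: {fp32 * factor:.1f}")  # 24.3 / 48.6 / 97.2
```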
XSX presumably doesn't have a dedicated Tensor (AI/inference) block to go with its BVH-acceleration (RT) block. Those INT8/INT4 rates are just extended packed maths on the FMA cores (which is why, when you sum ops, the dedicated matrix block on a Turing GPU is so much faster).
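To make "extended packed maths" concrete, here's a minimal software model of a packed INT8 dot-product-accumulate in the style of RDNA's mixed-precision dot instructions (the function name and packing layout here are illustrative, not actual ISA): one 32-bit lane retires four multiplies and four adds per clock, i.e. 8 ops where a plain FP32 FMA retires 2, and that's the whole 4x with no separate matrix unit involved.

```python
def sx8(v: int) -> int:
    """Sign-extend an 8-bit value to a Python int."""
    return v - 256 if v & 0x80 else v

def dot4_i8_i32(a_packed: int, b_packed: int, acc: int) -> int:
    """Model of a packed INT8 dot4-accumulate on one SIMD lane:
    four int8 products summed into a 32-bit accumulator, issued
    in the same slot a single FP32 FMA would occupy."""
    for shift in range(0, 32, 8):
        acc += sx8((a_packed >> shift) & 0xFF) * sx8((b_packed >> shift) & 0xFF)
    return acc

# [1, 2, 3, 4] . [5, 6, 7, 8] = 5 + 12 + 21 + 32 = 70
a = 1 | (2 << 8) | (3 << 16) | (4 << 24)
b = 5 | (6 << 8) | (7 << 16) | (8 << 24)
assert dot4_i8_i32(a, b, 0) == 70
```

A Turing tensor core, by contrast, executes a whole small matrix multiply-accumulate per instruction, which is why summing raw op counts makes the dedicated block look so far ahead.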
Phone SoCs bet on AI; consoles on RT; nVidia on both; AMD RDNA2???