I do kinda agree with NBD, but it all depends on how they sourced the figure, and whether it's stated. FLOPS doesn't have to be a theoretical number; you can of course find it experimentally (run a set number of floating-point ops and divide by the time taken in seconds). Even then, the actual or average number will usually vary significantly depending on workload, vector size, etc. You can take an average across these, or use an averaged workload, but this is probably a theoretical max quoted here tbf.
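Rough sketch of that empirical approach in Python/NumPy (hypothetical benchmark, and the result will swing a lot with array size, CPU, and whatever else is running):

```python
import time
import numpy as np

def measure_flops(n=1_000_000, reps=50):
    """Empirically estimate FLOPS: perform a known number of
    floating-point ops, then divide by the elapsed time in seconds."""
    a = np.random.rand(n)  # float64 arrays
    b = np.random.rand(n)
    start = time.perf_counter()
    for _ in range(reps):
        c = a * b + a  # 2 floating-point ops per element (multiply + add)
    elapsed = time.perf_counter() - start
    total_ops = 2 * n * reps
    return total_ops / elapsed

print(f"~{measure_flops() / 1e9:.2f} GFLOPS (FP64, this workload only)")
```

Note this only measures one workload at one precision (NumPy defaults to FP64 here), which is exactly why a single quoted number is slippery.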
But even then, the theoretical max for what precision? The traditional assumption would be FP32, but if they were being crafty, this could be an FP16 number. A 16-bit floating-point op is still a FLOP tbf, and the Xbone didn't have "rapid packed math" (double-rate FP16).
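For reference, a theoretical peak is usually just shader cores × clock × ops per cycle (2 for a fused multiply-add). A sketch using the commonly cited Xbone GPU specs, which are an assumption on my part:

```python
def theoretical_flops(shader_cores, clock_hz, ops_per_cycle=2):
    # Theoretical peak assumes every ALU retires a fused
    # multiply-add (2 ops) every single cycle -- never true in practice.
    return shader_cores * clock_hz * ops_per_cycle

# Commonly cited Xbox One GPU specs: 768 shader cores @ 853 MHz
peak = theoretical_flops(768, 853e6)
print(f"{peak / 1e12:.2f} TFLOPS FP32")  # ~1.31 TFLOPS
```

On hardware with double-rate FP16, you'd simply double that number for the FP16 figure, which is why quoting FLOPS without stating the precision can flatter a spec sheet.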