When reading into internal floating point values, normalization has no
effect. When reading integers without normalization, each image is
scaled individually to the full range of the integer type, so intensities
are not comparable from one image to the next. With normalization, all
images are scaled to the specified range, so slices can be compared.
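The difference can be illustrated with a small sketch. This is not the MINC API, just the underlying arithmetic: scaling each slice to its own maximum (full range) versus scaling every slice with one shared (normalized) range.

```python
# Hypothetical sketch (not the MINC API): per-slice full-range scaling
# versus scaling with a shared, normalized range.

def scale_slice(values, out_max, value_range=None):
    # Without a shared range, each slice is scaled to its own extrema
    # (full range); with one, the common range is used for every slice.
    lo, hi = value_range if value_range else (min(values), max(values))
    return [round((v - lo) / (hi - lo) * out_max) for v in values]

slice_a = [0.0, 1.0, 2.0]   # dimmer slice
slice_b = [0.0, 2.0, 4.0]   # twice as bright

# Per-slice scaling maps both slices to the same integers, so
# intensities can no longer be compared across slices.
assert scale_slice(slice_a, 255) == scale_slice(slice_b, 255)

# A shared range preserves the 2:1 relationship between the slices.
shared = (0.0, 4.0)
assert scale_slice(slice_a, 255, shared) == [0, 64, 128]
assert scale_slice(slice_b, 255, shared) == [0, 128, 255]
```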
When the input file is missing
MIvalid_range, the routines try to provide sensible
defaults, but unexpected results can still occur. The biggest problem
arises when
MIvalid_range is absent and the defaults
(full range for integer values and [0,1] for floating point values)
are not correct. In particular, when converting floating point values
to an integer type, any value outside the range [0,1] will overflow.
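The overflow case can be made concrete with a short sketch. Again, this is not the MINC API, only the rescaling arithmetic, assuming the default [0,1] valid range and an 8-bit output type:

```python
# Hypothetical sketch (not the MINC API): with a default valid range of
# [0, 1], floating point values outside that range overflow the target
# integer type when rescaled.

def scale_to_int(value, valid_min, valid_max, int_min, int_max):
    # Linear rescale from the assumed valid range to the integer range.
    scale = (int_max - int_min) / (valid_max - valid_min)
    return round((value - valid_min) * scale + int_min)

# In-range values map safely into an 8-bit type ...
assert scale_to_int(0.5, 0.0, 1.0, 0, 255) == 128

# ... but a value of 2.0 maps to 510, overflowing the 8-bit range.
assert scale_to_int(2.0, 0.0, 1.0, 0, 255) > 255
```

Supplying the correct valid range in the file (or as an override) avoids the problem, since the scale factor is then derived from the data's actual extent rather than the [0,1] default.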