17. Why is there always some degree of error in floating-point arithmetic when performed by a binary digital computer?
When floating-point arithmetic is performed by a binary digital computer, some degree of error arises because the computer can store only a fixed number of digits. A number whose fractional part needs more digits than are available must be rounded off to the nearest representable value (rounding up or down), instead of being stored exactly. The digits lost in this rounding are the source of the error. In addition, many decimal fractions (such as 0.1) have no finite binary representation at all, so even simple values are already rounded the moment they are stored.
Example: 5/3 = 1.666666... (the 6 repeats forever). If we enter this in a digital computer that keeps 6 digits for the fractional part and rounds up, the stored answer becomes 1.666667. The difference between this stored value and the true result is the error.
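The same effect can be observed directly in any language that uses binary floating point. A small Python sketch (Python's `float` is a 64-bit IEEE 754 binary value, so it keeps about 15-17 significant decimal digits):

```python
from fractions import Fraction

# 5/3 cannot be stored exactly: the computer rounds it to the
# nearest representable 64-bit binary value.
x = 5 / 3
print(x)  # prints 1.6666666666666667, not the true 5/3

# Even a simple decimal like 0.1 has no finite binary expansion,
# so tiny rounding errors accumulate in arithmetic:
print(0.1 + 0.2)         # prints 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # prints False

# The gap between the stored value and the true mathematical value
# is exactly the error described above. Fraction(x) recovers the
# exact binary value the computer actually stored.
error = Fraction(5, 3) - Fraction(x)
print(float(error))  # a tiny nonzero number, roughly 1e-17 in magnitude
```

This is why programs compare floating-point results with a tolerance (e.g. `abs(a - b) < 1e-9`) rather than with `==`.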