I was writing a program that takes an input in dollars from the user and converts it to an int representing cents. The user will always enter either an integer or a floating-point number with at most 2 decimal places. I want to convert it to an int by multiplying by 100, but the program doesn't work for some numbers:
int cents = (dollars*100);
dollars is the floating-point input that the user gives. For example, if dollars = 4.2, then dollars * 100 evaluates to roughly 419.999 instead of 420, so cents becomes 419. How can I correct this problem?
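A minimal sketch that reproduces this, assuming dollars is declared as a float (which matches the 419.999 value I'm seeing):

    #include <stdio.h>

    int main(void)
    {
        float dollars = 4.2f; // 4.2 has no exact binary representation
        printf("%.5f\n", dollars * 100);      // prints just under 420, e.g. 419.99997
        printf("%d\n", (int)(dollars * 100)); // truncation drops everything after the point: 419
        return 0;
    }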
Simply adjust the value so that the truncating cast rounds to the nearest integer instead:
int cents = (int)(dollars*100 + 0.5);
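For instance, here is a complete sketch; the input handling is an assumption, since only the conversion line was shown in the question:

    #include <stdio.h>

    int main(void)
    {
        float dollars;
        printf("Dollars: ");
        if (scanf("%f", &dollars) != 1)
        {
            return 1; // input was not a number
        }
        // Adding 0.5 before the cast makes truncation round to the
        // nearest cent (valid for the non-negative amounts expected here).
        int cents = (int)(dollars * 100 + 0.5);
        printf("%d\n", cents);
        return 0;
    }

Note that the + 0.5 trick only rounds correctly for non-negative values. If the amount could be negative, roundf from <math.h> rounds correctly in both directions: int cents = (int)roundf(dollars * 100);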