I will really, really try to stop thinking about this after writing this message :-)
Here is one possible rule one could imagine, but I think it would be pretty surprising in its behavior if you implemented it.
"If the non-integer decimal literal can be represented exactly as a double, make it a double, otherwise make it an exact BigDecimal (which is always possible for any finite sequence of decimal digits)."
Consider all of the literals 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0. Among those 11 literals, only 0.0, 0.5, and 1.0 can be represented exactly as a double, so according to the rule above, those 3 would be double, and the other 8 would be made BigDecimal.
Similarly, for the 101 literals with 2 digits after the decimal point from 0.00, 0.01, 0.02, etc. up to 1.00, only 5 of them can be represented exactly as a double: 0.00, 0.25, 0.50, 0.75, and 1.00. According to the rule above, only those 5 literals would be type double, and the rest would all be BigDecimal.
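You can check these counts yourself; a quick sketch (the class and method names here are just made up for illustration) is to compare the exact decimal value of each literal against the exact value of the nearest double, using the `BigDecimal(double)` constructor, which preserves the double's binary value exactly:

```java
import java.math.BigDecimal;

public class ExactDoubles {
    // True if the decimal string's value is exactly representable as a double.
    static boolean exactAsDouble(String literal) {
        BigDecimal decimal = new BigDecimal(literal);              // exact decimal value
        BigDecimal binary = new BigDecimal(Double.parseDouble(literal)); // exact value of nearest double
        return decimal.compareTo(binary) == 0;                     // compareTo ignores scale (0.50 vs 0.5)
    }

    public static void main(String[] args) {
        // Count exact literals among 0.00, 0.01, ..., 1.00 (101 literals)
        int exact = 0;
        for (int i = 0; i <= 100; i++) {
            String literal = String.format("%d.%02d", i / 100, i % 100);
            if (exactAsDouble(literal)) {
                System.out.println(literal + " is exact as a double");
                exact++;
            }
        }
        System.out.println(exact + " of 101 literals are exact");
        // → prints 0.00, 0.25, 0.50, 0.75, 1.00, then "5 of 101 literals are exact"
    }
}
```

The same loop over one-digit literals (0.0 through 1.0) finds exactly the 3 values 0.0, 0.5, and 1.0, matching the counts above.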