Why Is A Numpy Int Not An Instance Of A Python Int, But A Numpy Float Is An Instance Of A Python Float?
Solution 1:
Python integers have arbitrary precision: type(10**1000) is still int, and printing it outputs a one followed by a thousand zeros.
NumPy's int64 (which is what np.int_ is on my machine) is an integer stored in 8 bytes (64 bits), and anything outside that range cannot be represented. For example, np.int_(10)**1000 will give you a wrong answer - but quickly ;).
Thus they are different kinds of numbers; subclassing one under the other makes about as much sense as subclassing int under float would, which is presumably what the NumPy developers concluded. It is best to keep them separate, so that no one is tempted to confuse one with the other.
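The difference is easy to see in practice: a Python int grows without bound, while int64 refuses values outside its 64-bit range. A small sketch (assuming NumPy is installed):

```python
import numpy as np

big = 10 ** 1000                  # Python int: arbitrary precision
print(len(str(big)))              # a one followed by 1000 zeros -> 1001 digits

print(np.iinfo(np.int64).max)     # the largest value an int64 can hold: 2**63 - 1

try:
    np.int64(2 ** 63)             # one past that maximum
except OverflowError as exc:
    print("overflow:", exc)       # the conversion to a fixed-width integer fails
```

The Python int keeps growing as needed; the NumPy type hits a hard ceiling at 2**63 - 1.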
The split is done because arbitrary-size integers are slow, while numpy tries to speed up computation by sticking to machine-friendly types.
On the other hand, floating point is standard IEEE 754 double precision in both Python and NumPy, supported out of the box by our processors, so the two types really do describe the same kind of number.
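This is exactly why the instance checks in the question's title come out the way they do: np.float64 can, and does, subclass Python's float, since both are IEEE 754 doubles, while np.int64 cannot subclass Python's arbitrary-precision int. A quick check on Python 3 (assuming NumPy is installed):

```python
import numpy as np

# float64 shares its representation with Python's float, so it subclasses it
print(issubclass(np.float64, float))       # True
print(isinstance(np.float64(1.0), float))  # True

# int64 is fixed-width and Python's int is not, so there is no subclass relation
print(issubclass(np.int64, int))           # False on Python 3
print(isinstance(np.int64(1), int))        # False
```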
Solution 2:
Because numpy.int_() is actually 64-bit, while int can have arbitrary size: CPython stores an int as a sequence of 30-bit digits, so it uses about 4 extra bytes for every 30 bits of magnitude you put in. int64 has a constant size:
>>> import numpy as np
>>> a = np.int_(0)
>>> type(a)
<type 'numpy.int64'>
>>> b = 0
>>> type(b)
<type 'int'>
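The size difference is also visible directly: sys.getsizeof reports a Python int that grows with its value, while a NumPy int64 scalar always carries an 8-byte payload. A sketch (the exact getsizeof figures are CPython-specific; NumPy assumed installed):

```python
import sys
import numpy as np

small = 1
huge = 10 ** 100
print(sys.getsizeof(small))                        # object header plus one digit
print(sys.getsizeof(huge))                         # grows with the magnitude
print(sys.getsizeof(huge) > sys.getsizeof(small))  # True

a = np.int64(0)
print(a.nbytes)                                    # 8: fixed payload, whatever the value
```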