View Issue Details
|ID||Project||Category||View Status||Date Submitted||Last Update|
|0000195||LDMud||LPC Compiler/Preprocessor||public||2004-11-26 22:34||2009-10-07 01:42|
|Summary||0000195: New datatype 'long'|
|Description||Short: New datatype 'long'|
From: Wessel Troost <firstname.lastname@example.org>
Another good option would be a long data type.
Believe it or not, I've seen more than two MUDs where player
experience went over MAX_INT.
|Additional Information||Alternative: arbitrary long numbers (using one of the long-number libraries)|
|Tags||No tags attached.|
|External Data (URL)|
Implement multi-precision numbers (see the GNU MP Library, GMP) at the LPC level.
Righ's version for FinalFrontier currently uses efuns.
I think that on LP64 architectures 'long' will be obsolete. The question is whether we should implement something along the lines of GMP. Currently that seems a bit of an overkill to me: it would make the language more complex, and I don't really have a specific need for it in MUD development.
What about you (dear readers... ;-)?
||I don't know whether we should support GMP, but if we do so, it should be transparent to the user. I.e. the driver should automatically switch from int or float to a GMP type. (Because LPC is an amateur language.)|
There is a need for integers > MAX_INT. I "implemented" sort-of GMP support, but (as Lars mentioned) it is based on efuns, so it's not transparent to the user.
I don't think it would be overkill to support GMP, even for *every* LPC integral or floating point type, since it is actually well optimized and really fast. Some measurements I made confirm that.
That would help us get rid of limits and incorrect floating point calculations with no, or only a really minimal, trade-off.
||Just to be sure: do you have a need for integers > 2^63 / arbitrarily large integers? Or a need for integers > 2^31?|
||I've got a need for > 2^31.|
Ok. ;-) That is something I also would like to have.
The question is whether we really need an additional type for that, or whether we should just fix the issues on LP64 platforms and then use the 64-bit-wide long. (I regard a libGMP type as additional, even if we managed to implement it so that the user does not notice.)
Overkill wasn't necessarily aimed at runtime efficiency, but also at the work/time needed to implement it properly: starting from the size of an svalue, to the GMP developers' suggestion that interpreters keep a free list or stack of initialized variables ready for use, to save a lot of initializations/allocations/clearings. And then we might want to find ways to switch transparently between a standard C int and the mpz_t, depending on the value the LPC int holds. Not to mention the less readable interface libGMP necessarily has for manipulating its entities (mpz_mul_ui(result, param, n); vs. result = param * n;). We also have to take care of initializing and freeing all the GMP integers properly, e.g. before calling errorf().
I venture the idea that it may be easier to switch T_NUMBER to a long long on ILP32 platforms, if we just want to have a larger range.
> The question is, if we really need an additional type for that or if
> we should just fix the issues on LP64 platforms and then use the
> 64 bit wide long.
No, it isn't: LP64 platforms are not widespread yet.
> (I regard a libGMP-type as additional, even if we would manage to
> implement it so that the user does not notice.)
> Overkill wasn't necessarily aimed at runtime efficiency, but also
> at the work/time to implement it properly.
You have to decide: Do you want to minimize the work or do you want to add an *additional* type? :-)
I think the first thing to do would be to establish a clear distinction between LPC integers and C integers internally in the driver. That is also needed for proper 64-bit long support, and it should be the trickiest part.
> if we just want to have a larger range
That seems to be the first step and the "must". The second question is why we have to have limited data types at all (limited in width, limited in precision). LPC is an interpreted language; we don't really have to squeeze the last processor ticks out of it.
I think once we had that clear distinction mentioned above, it should not be a real problem to do any of the solutions.
I thought a little more about the problem. We've got three different situations when handling numeric data types:
1) lib internal calculations, loading and storing numbers to/from objects
2) using numeric parameters with efun calls
3) driver internal
Point 3 code should avoid using the LPC integral typedefs at all, but it often does not, because they are convenient if you want to be sure of having at least a 32-bit integer type to work with. This code is also the major problem if we want to use 64-bit integers.
In a perfect world, point 1 driver code would be encapsulated in its own module, and the only API visible to outside driver code would be two functions converting C types from/into it.
Point 2 code (especially the efun code) would use the conversion functions described above. Those functions could also do the runtime checking for driver-internal needs, for instance against maximum array widths/string lengths and so on.
I think we have to do *a lot of* the work described above just to support 64-bit C types. Introducing a new, non-transparent integral type would be hard to explain to the user, and would stem only from the driver being unable to distinguish between different numeric datatypes.
||Note Added: 0000220|
|2009-09-30 18:11||zesstra||Relationship added||related to 0000337|
|2009-10-05 18:27||zesstra||Note Added: 0001475|
|2009-10-05 18:27||zesstra||Status||new => feedback|
|2009-10-06 02:34||Gnomi||Note Added: 0001482|
|2009-10-06 04:29||Largo||Note Added: 0001491|
|2009-10-06 05:11||zesstra||Note Added: 0001495|
|2009-10-06 06:26||Largo||Note Added: 0001497|
|2009-10-06 19:13||zesstra||Note Added: 0001504|
|2009-10-07 00:19||Largo||Note Added: 0001505|
|2009-10-07 01:42||Largo||Note Added: 0001506|