View Issue Details

ID:              0000667
Project:         LDMud
Category:        Runtime
View Status:     public
Last Update:     2009-10-27 13:14
Reporter:        _xtian_
Assigned To:
Priority:        normal
Severity:        minor
Reproducibility: always
Status:          new
Resolution:      open
Summary:         0000667: structs: compiled in access costs as many evals as runtime lookup
Description:
If s is a struct with element int i:

both
  s->i; // compiled in access
and
  s->("i"); // computed at runtime
have the same evalcost (approx 30 evals)

This is not as expected: compiled-in access should be faster and cheaper.
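
For illustration, a minimal declaration that both access forms could refer to (the names Item and demo are made up for this example; the mapping is included for comparison):

  struct Item { int i; };

  void demo()
  {
      struct Item s = (<Item> 42);
      mapping m = ([ "i": 42 ]);
      int a, b, c;

      a = s->i;       // member index resolved at compile time
      b = s->("i");   // member looked up by string at runtime
      c = m["i"];     // mapping lookup by string, for comparison
  }
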
Tags:            No tags attached.
External Data (URL)

Activities

Gnomi

2009-07-28 00:42

manager   ~0001242

Struct indexing costs only 4 eval ticks (putting the struct on the stack, putting the index on the stack, putting the recognized structure type on the stack and performing the index operation).

This can't be made much faster. And I don't think we should make indexing by string more expensive, since mapping indexing with strings costs only 3 ticks.

_xtian_

2009-07-30 08:05

reporter   ~0001244

Yes, it seems my measurements were off. I now get approx. 5 eval ticks for both kinds of access. Thank you for looking it up.

... one of the drawbacks of using an interpreter. One less merit for using structs, then, since in theory their compiled-in accesses should have been (noticeably) faster/cheaper than using mappings and mapping indexing.
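
For reference, a sketch of how such a tick measurement could be reproduced. It assumes the get_eval_cost() efun (returning the remaining evaluation cost of the current execution) and the struct Item from the sketch in the description:

  // assumes: struct Item { int i; };
  int ticks_for_compiled_access(struct Item s)
  {
      int before, after, dummy;

      before = get_eval_cost();
      dummy = s->i;            // compiled-in member access
      after = get_eval_cost();

      // get_eval_cost() counts down, so the difference is the number of
      // ticks spent in between (including the efun calls themselves).
      return before - after;
  }
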

zesstra

2009-09-04 11:40

administrator   ~0001253

Well... Ticks are one thing, but just as important is runtime. I don't have any data on that and haven't used structs much. Does anybody know how that compares?
BTW: I think the biggest advantage of structs is that the members are properly typed, while values in a mapping carry no type information at all.
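
A small sketch of what the typing buys (names again made up; assumes #pragma strict_types so the type checks are enforced):

  #pragma strict_types

  struct Item { int i; };

  void typing_demo()
  {
      struct Item s = (<Item> 42);
      mapping m = ([ "i": 42 ]);

      // s->i = "oops";   // with strict typing this should be rejected at
      //                  // compile time, because member i is declared int
      m["i"] = "oops";     // accepted: mapping values carry no declared type
  }
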

zesstra

2009-10-27 04:52

administrator   ~0001556

Concerning my last note: I timed 100000 indexing operations, 5 times each, with a struct with one element and a mapping with one element:
t->i:     Time: 73074 +-   53 us
t->"i":   Time: 73143 +-  160 us
t->("i"): Time: 81036 +-  754 us
t["i"]:   Time: 75890 +- 1749 us
(The numbers after +- are the standard deviations of the 5 measurements.)
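
A sketch of how one of these timings could be reproduced (assumes the utime() efun returning ({ seconds, microseconds }) and the struct Item from the earlier sketches; the repetition and standard deviation are left out):

  #define RUNS 100000

  int usecs_for_struct_index(struct Item s)
  {
      int *t0;
      int *t1;
      int dummy, j;

      // Note: RUNS iterations burn a lot of eval ticks, so the evaluation
      // cost limit may have to be raised for this to run through.
      t0 = utime();
      for (j = 0; j < RUNS; j++)
          dummy = s->i;        // compiled-in struct member access
      t1 = utime();

      // elapsed wall-clock time in microseconds (includes the loop overhead)
      return (t1[0] - t0[0]) * 1000000 + (t1[1] - t0[1]);
  }
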

Indexing a struct by t->i should be faster than indexing mappings (besides being typed), because it is basically indexing a vector/array. The difference should be bigger for larger structs/mappings...
BTW, I imagine that indexing big structs by strings will quickly become less favourable, because the time to find the member grows linearly with struct size.

BTW2: the compiler seems to optimize t->"i" to t->i. Couldn't it do the same for t->("i") ("i" is still a constant, after all)?

I agree with Gnomi: t->i can't be made much faster without removing type checks, which seem to have the largest share of the execution time here (the indexing itself is basically a vector indexing operation in C).
String indexing of structs might be made more efficient for large structs, but I don't know if it is worth the effort... Out of curiosity: how large are your structs, Xtian?

_xtian_

2009-10-27 12:39

reporter   ~0001564

Hm, I hadn't even really thought about scalability. I would say none of our structs are larger than 10-20 elements.
For me the interest lay more in the simple overhead of the operation - what could be gained when we have several hundred (different) struct-queries per execution.

BTW: what is the rationale behind translating systime into eval-costs? Is there some sort of guideline? Is it even linear?

zesstra

2009-10-27 13:14

administrator   ~0001566

Ok, I guess 10-20 is not too bad. ;-) Might be interesting to measure it once for comparison. You may want to order the members according to use frequency, because the string lookup is a linear search.

There is only one way of translating eval costs to execution time: measuring it. The relation is certainly not linear.
The interpreter just increases the evalcost by one for every operation (or, more precisely, for every loop iteration in eval_instruction()). If dynamic evalcosts are enabled, expensive efuns and efuns dealing with potentially large amounts of data incur some additional costs (e.g. 5 ticks per iteration in md5(), 1 tick per 1000 characters of string length, etc.). But the dynamic evalcosts are not used consistently.
So, if you are very unlucky, an operation may cost only 1 tick and yet stall the driver for hundreds of milliseconds (or even more), because the driver is waiting for the hard disk.
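
One way to see the dynamic part in action (a sketch; assumes get_eval_cost() and that md5() accepts an iteration count as described above):

  void dynamic_cost_demo()
  {
      int before, after;

      before = get_eval_cost();
      md5("some data", 10);
      after = get_eval_cost();
      printf("md5 x10:   %d ticks\n", before - after);

      before = get_eval_cost();
      md5("some data", 1000);
      after = get_eval_cost();
      printf("md5 x1000: %d ticks\n", before - after);
      // With ~5 ticks per iteration the second call should be charged
      // roughly 5000 ticks more, although it is still a single efun call.
  }
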

Issue History

Date Modified Username Field Change
2009-07-27 23:35 _xtian_ New Issue
2009-07-28 00:42 Gnomi Note Added: 0001242
2009-07-30 08:05 _xtian_ Note Added: 0001244
2009-09-04 11:40 zesstra Note Added: 0001253
2009-10-27 04:52 zesstra Note Added: 0001556
2009-10-27 12:39 _xtian_ Note Added: 0001564
2009-10-27 13:14 zesstra Note Added: 0001566