There have been quite a few changes, fixes and enhancements to DelphiWebScript, available in the SVN version:
- class visibility is now enforced: private and protected are equivalent to Delphi’s strict private and strict protected. The other levels are public (members accessible to the whole script) and published (the default visibility, intended for external exposure, RPC, persistence, etc.)
- added support for class abstract and class sealed: an abstract class has to be subclassed before it can be instantiated, a sealed class can’t be subclassed.
- virtual methods are now based on a Virtual-Method Table, previously they were implemented in a way vaguely similar to Delphi’s “dynamic” methods. The new implementation is much faster, but at the cost of a (hopefully reasonable) memory overhead. VMTs are shared, and thus only use memory for classes that actually introduce or override a virtual method.
- fixes and improvements to the exception handling (ExceptObject now available).
- fixes to the circular reference garbage collector.
- fixes and improvements to the virtual class methods support, and properties based on class methods.
- class-less procedures and functions calls are now faster.
- partial inlining and loop-unrolling optimization for small statement blocks*.
- other misc. optimizations, improvements and fixes.
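Taken together, the visibility, class-modifier and VMT changes can be illustrated with a short script. This is only a sketch of the script-side syntax (the class names and values are made up for illustration), assuming DWScript’s Delphi-like declarations:

```pascal
type
   TBase = class abstract   // abstract: must be subclassed before instantiation
      private
         FValue : Integer;                          // strict-private semantics
      protected
         procedure SetValue(v : Integer); virtual;  // dispatched through the shared VMT
      public
         property Value : Integer read FValue;
   end;

type
   TConcrete = class sealed (TBase)   // sealed: cannot be subclassed further
      protected
         procedure SetValue(v : Integer); override;
   end;

procedure TBase.SetValue(v : Integer);
begin
   FValue := v;
end;

procedure TConcrete.SetValue(v : Integer);
begin
   inherited SetValue(v + 1);
end;

// var b := TBase.Create;   // would fail: TBase is abstract
var c := TConcrete.Create;
c.SetValue(41);
PrintLn(c.Value);           // virtual call resolved via the VMT
```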
There is also a new JSON support unit, which isn’t currently used by DWS, but has been introduced for testing and investigation. The strict JSON parser is, AFAICT, about twice as fast as the current “fastest” Delphi JSON parser, with still some room for improvement.
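For context, usage of such a JSON unit might look like the following sketch; the TdwsJSONValue.ParseString entry point and the name-indexed default property are assumptions based on how the dwsJSON unit later shipped, not details given in this post:

```pascal
uses dwsJSON;

var
   jv : TdwsJSONValue;
begin
   // Parse a small JSON document (the strict parser rejects malformed input)
   jv := TdwsJSONValue.ParseString('{"menu": {"id": "file", "items": [1, 2, 3]}}');
   try
      WriteLn(jv['menu']['id'].AsString);   // navigate via the name-indexed default property
   finally
      jv.Free;
   end;
end.
```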
*: the impact of this seems to be highly CPU-dependent; on the “Mandelbrot” demo, for instance, the speedup is a few percentage points on an AMD Phenom, but about 40% on my Intel Core i5.
edit: to be more accurate, it brings the Intel processor up to the level of the AMD CPU; the code must have been hitting a weakness in the Core i5’s branch predictor.
15 thoughts on “DWScript news: classes, exceptions, speedups”
Great news, 40% is A LOT, even if it is CPU-dependent. Where can I find the JSON unit, and which library is the 2× speedup measured against?
Super, can’t wait to test it. Regarding AMD processors: these seem to be “unreasonably” slower and less stable than Intel’s.
In the past I had 2 AMD processors (a Duron, a very good one, and a Sempron, a very lame one); now I am a proud owner of an i7 (workstation) and an i3 (laptop), which work flawlessly. For a developer I would recommend only Intel processors, but that is just my two cents :D.
Thank you very much for your work Eric!!
It’s in the SVN. The JSON samples were taken from json.org; as for the benchmark, it was quite informal, so more feedback is welcome!
Hi again Eric. I simply loved your JSON decoder code, really neat. Any plans to make it available for older Delphi versions, or as a separate library?
A couple of suggestions.
You could replace the “JSONImmediate” Variant with some sort of variant record, “JSONImmediateValue”, to remove the “Variants” dependency and overhead.
For bigger objects, let’s say bigger than NN, you could keep the list ordered and then perform a binary search in the “IndexOfName” function.
There are other bits, but I’m bothering you too much already, and you probably already saw them all.
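The binary-search suggestion above could look something like this sketch; FNames, FCount and the NN threshold are hypothetical names for illustration, not actual fields of the unit:

```pascal
// Hypothetical sketch: binary search over property names kept sorted,
// as suggested for objects with more than NN properties.
// Assumes FNames : array of String and FCount : Integer.
function IndexOfName(const aName : String) : Integer;
var
   lo, hi, mid, cmp : Integer;
begin
   lo := 0;
   hi := FCount - 1;
   while lo <= hi do begin
      mid := (lo + hi) shr 1;
      cmp := CompareStr(FNames[mid], aName);
      if cmp < 0 then
         lo := mid + 1            // target is in the upper half
      else if cmp > 0 then
         hi := mid - 1            // target is in the lower half
      else Exit(mid);             // exact match found
   end;
   Result := -1;                  // not found
end;
```

Note that keeping the list sorted would conflict with preserving the original attribute order, which is one of the trade-offs discussed in the reply below.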
What “samples” are there on json.org ? I have my own JSON parser that I would be interested to benchmark against some alternatives.
In that particular case, the optimization merely brings the Core i5 up to the Phenom’s level; the code must have been hitting a weakness in the Intel processor’s branch predictor.
So the speedup is more due to working around a “bug” in the processor than anything else.
There are currently no plans for older versions.
For objects with many properties, the plan is currently to introduce indexing at some point, but for the moment I’m leaning towards keeping the attributes in their original order (to preserve a 1:1 relationship with the original JSON data). The indexing could thus use any strategy: binary search for smaller property counts, hashes for higher counts or dynamic usage cases (such as when building a JSON document from scratch), etc.
The current objects are meant to be kept lean and basic though, a kind of basic Lego block, with higher-level features meant to be introduced in other, higher-level classes.
As to the Variant overhead, the alternatives involve either a record with overlaps (a union), which could be troublesome down the road, or a class per datatype. For the moment, I chose not to choose, hence the Variants 😉
Not sure how it can be reached from their main page though, I found the page via google.
Great, thanks for that.
Thanks AGAIN! A couple of those examples identified a couple of bugs in my implementation (heterogeneous arrays were incorrectly prohibited, and a literal null caused problems in some cases).
I would be interested in comparing benchmark results; non-optimised results for my implementation are:
menu (small) 0.25ms
menu (large) 0.49ms
web app 1.07ms
Note these times are in milliseconds, i.e. the glossary test completes over 3200 times per second.
Those times were obtained running a test process 500 times and averaging the process times for each run. Each test uses a file stream to parse the JSON from disc. i.e. faster times could be obtained by pre-loading the file content into a memory stream then timing the parsing of the memory stream.
Since we have different CPUs, absolute timings may not be very meaningful, so you might want to grab the unit from the SVN and plug it into your benchmark to compare on your machine.
That said, on my low-clocked Phenom, the web app case is at 0.048 ms from memory, and 0.1 ms from disk… In other words, it takes slightly more time to load the JSON from the Windows file cache than it takes to parse it.
The current implementation of the JSON parser won’t work with non-Unicode versions of Delphi when decoding \u escapes. It will decode only 7-bit ASCII, and could miss some 8-bit characters (like € and such).
That’s the limit of using string/char everywhere: it’s not only a problem of character size, but also of charset encoding. That’s why I normally use a native UTF-8 parser and encoder for JSON data: it works well in all versions of Delphi, and uses less RAM most of the time. But http://www.ietf.org/rfc/rfc4627.txt states that the storage encoding can be any Unicode flavor, with UTF-8 as the default.
Non-Unicode versions of Delphi are not supported indeed, and won’t be, as DWScript as a whole doesn’t and won’t support them either (generics are required, and even D2009 support is problematic because of compiler bugs).
In addition, frankenstrings aren’t supported either (and won’t be); those are D2009+ “String” values whose content is not UTF-16.
UTF-8 would have had quite a few advantages, but in D2009+, with UTF-8 strings, you’re really working against the RTL and the compiler, and data loss is just a silent automagic cast away…
CPU diffs noted, though my own test machine isn’t especially swift either; I shall get around to plugging your parser into my SmokeTest harness for a proper comparison at some point. 🙂
Compatibility goals would be another difference. My JSON parser employs my own Unicode stream decorator to perform a WideChar-wise read from an “ANSI”, UTF-8 or UTF-16 stream. (The decorator also supports UTF-32, but since it relies on WideChar-wise reads, my JSON parser doesn’t itself accommodate UTF-32; I figure it is unlikely to crop up “in the wild”, and has zero chance of appearing in JSON in my own apps.) This stream decorator is compatible with all versions of Delphi I have tested with (7, 2007, 2009+).
I have a problem when creating object instances.
edit: moved to issue #91
Comments are closed.