pxScene is a platform-independent drawing application that will be used on a wide variety of embedded devices for the Hercules client. You can find more information, and download the app for testing, at http://www.pxscene.org/
The code for pxCore / pxscene is available on GitHub here:
Build instructions can be found here: https://github.com/pxscene/pxCore/blob/master/examples/pxScene2d/README.md
Step-by-step instructions to build the existing duktape integration on Ubuntu 16.04 can be found here:
The -DUSE_DUKTAPE switch is deprecated and no longer used. Instead, there are two new CMake switches, SUPPORT_NODE and SUPPORT_DUKTAPE, both enabled by default. At runtime, the app looks for a file called .sparkUseDuktape in the home directory; if that file is present, the Duktape engine is used.
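A minimal build-and-toggle sketch, assuming the build layout from the pxScene build instructions (the temp build directory and switch defaults are taken from the README; verify the paths against your checkout):

```shell
# Configure and build with both engines compiled in (the defaults).
cd pxCore/examples/pxScene2d
mkdir -p temp && cd temp
cmake -DSUPPORT_NODE=ON -DSUPPORT_DUKTAPE=ON ..
make -j"$(nproc)"

# Opt in to the Duktape engine at runtime by creating the marker file:
touch "$HOME/.sparkUseDuktape"

# Delete the marker file to fall back to the Node engine:
rm -f "$HOME/.sparkUseDuktape"
```

Because the marker file is checked at runtime, you can switch engines between test runs without rebuilding.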
You can validate which engine is in use through the "about.js" example (just type "about.js" in the address bar of pxScene). It will list the engine, either Node or Duktape.
For this challenge, we are going to expand that investigation to try some of Duktape's features for memory reduction and see:
* What the effect is on memory usage
* What the effect is on pxscene functionality
There are a number of different options documented here:
For the profiling documentation, we want you to run the fancy.js example and capture memory profile information. You can also test with a "real" application here:
That URL should load in pxscene and show a keyboard-browseable UI that we are building.
For the config flags, we want to know:
* If def'ing or un-def'ing a flag helps with memory usage
* If def'ing or un-def'ing a flag breaks pxscene
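Duktape 2.x applies config flags by regenerating duk_config.h through its configure tool, where -D forces an option on and -U forces it off. A hedged sketch, run from the Duktape source tree (the two flags below are illustrative examples only; check the flag names against the Duktape config documentation for the version pxscene bundles):

```shell
# Regenerate duk_config.h (plus duktape.c/duktape.h) with one option
# forced on and one forced off, then rebuild pxscene against the output
# directory and re-run the memory measurements to compare.
python2 tools/configure.py \
    --output-directory /tmp/duktape-lowmem \
    -DDUK_USE_LIGHTFUNC_BUILTINS \
    -UDDUK_USE_VERBOSE_ERRORS
```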
You can find the target branch here: https://github.com/pxscene/pxCore/tree/_duktape2
The deliverable is a document describing each memory flag and its impact on pxscene, from both a memory standpoint and a usability standpoint. Note that the more information you provide here, the better.
You should also describe, in detail, how the flags were applied and tested. The reviewers will need to reproduce your results, so your documentation must be clear about how you tested and how you determined the output.
For review, we want reviewers to double-check the results above to make sure they make sense and are reproducible.