Sunday 27 December 2009

Webdesign and Supercompilation

With supercompilation being a long-forgotten technique invented decades ago, and with the term “web design” having been usurped by graphic artists & HCI experts, I doubt this post will be anything close to popular, but as always, that will not stop me from expressing my opinion. Let’s take it slowly now.

Let us take “web design” in a good, broad sense here: not just the omnipresent “logo on the right vs logo on the left” & “10 tips to get more clicks”. Just as software design comprises multiple heterogeneous activities concerned with making a piece of software, and just as language design is about creating a good language suited to its target domain, web design in general is about how to make a web site, a web service or a web app well.

Supercompilation is a program transformation method of aggressive optimisation: it restructures the code under as many assumptions as can be made, throwing away all dead code, unused options and inactive functionality. It was irrelevant, or at least unproductive, during the structured programming epoch, but the results of supercompilation were promising before that and remain promising in our time, the epoch of multi-purpose factory factory frameworks.
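
To make the effect concrete, here is a minimal sketch (in Python, purely illustrative and not taken from any actual supercompiler) of what specialising code against a known assumption looks like: once an option is known to be fixed, the branch that depends on it is dead and disappears.

    # A hand-written illustration of the effect of specialisation:
    # once we assume show_tax is always False, the tax branch is dead code.

    # Generic version: checks the option on every call.
    def render_price(amount, currency, show_tax, tax_rate):
        text = "%.2f %s" % (amount, currency)
        if show_tax:  # on this particular site, never True
            text += " (incl. %.0f%% tax)" % (tax_rate * 100)
        return text

    # What a supercompiler-style transformation would leave behind
    # under the assumption show_tax == False: no branch, no tax_rate.
    def render_price_specialised(amount, currency):
        return "%.2f %s" % (amount, currency)

    assert render_price(9.99, "EUR", False, 0.21) == render_price_specialised(9.99, "EUR")

A real supercompiler derives such residual programs automatically by driving the original program with the known assumptions; the point here is only the shape of the result.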

The current (at least since 1999) trend in web design is dynamics and more dynamics. The content and its presentation are separated, and most of the time what the end-user sees is generated from the actual content stored somewhere in a database, using representation rules expressed in anything from XSL to AJAX (in software we would call such a process “pretty-printing”). However, this is necessary only for truly dynamic applications such as Google Wave. In most other rich internet applications the content is accessed (pretty-printed) much more often than it is changed.

When the supercompilation philosophy is applied here, we quickly see that it is possible to store pre-generated data ready for immediate demonstration to the end-user. If the dependencies are known, one can easily design an infrastructure that responds to any change of data by re-generating all the visible data that depend on it. And that is the way it can and should be: I run several websites, ranging from my personal page to a popular contest syndication facility, all built with this approach. The end-user always sees solid, statically generated XHTML, which is updated on the server whenever the need arises, be it once a minute or once a month. Being static is not necessarily a bad thing, especially if you provide the end-user with all the expected buttons and links, and it saves time and computational effort on all the on-the-fly processing requests.
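
A minimal sketch of such an infrastructure (in Python, with hypothetical names and data; the actual sites need not be implemented this way): every page declares which data it depends on, and a change to a datum triggers regeneration of exactly the affected static pages.

    # A sketch of dependency-driven regeneration of static pages
    # (hypothetical data model and file layout).
    from pathlib import Path

    data = {
        "news": ["site launched"],
        "about": "A personal page.",
    }

    # Which static pages depend on which data keys.
    depends_on = {
        "index.html": {"news", "about"},
        "news.html": {"news"},
    }

    def render(page):
        """Pretty-print one page into static XHTML from the current data."""
        items = "".join("<li>%s</li>" % n for n in data["news"])
        if page == "index.html":
            body = "<p>%s</p><ul>%s</ul>" % (data["about"], items)
        else:
            body = "<ul>%s</ul>" % items
        return "<html><body>%s</body></html>" % body

    def update(key, value, out_dir="site"):
        """Change one datum and regenerate every page that depends on it."""
        data[key] = value
        Path(out_dir).mkdir(exist_ok=True)
        for page, deps in depends_on.items():
            if key in deps:
                Path(out_dir, page).write_text(render(page))

    # A change to the news list regenerates index.html and news.html, nothing else.
    update("news", ["site launched", "new contest added"])

The web server then only ever serves the static files; the generator runs out of band, whether once a minute or once a month.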

When will it not work: for web apps that are essentially front-ends for volatile database access; for web apps that are truly dynamic in nature; for web apps where user preferences cannot be expressed with CSS & access rights. When will it work: pretty much everywhere else. Think about it. Have fun.

3 comments:

  1. You are saying "compile the data to a static page when the data is created", instead of "interpret the data when the page is viewed".

    In the true spirit of partial evaluation, you can derive your compiler from your interpreter. Is that what you are doing? Do you actually use partial evaluation to create your static web pages? How are you doing the dependency analysis? Or is it more a way of thinking about the design of your (web) program? Isn't this (just) caching? With caching you can design generation of your static pages truly lazily, i.e., only if there is at least one view.

    And I also don't understand the motivation of the approach. Is it to improve the performance of the website, or is it a different (easier) way to write the web program?

  2. The motivation is: (1) to reduce the requirements on the client by reducing the amount of evaluation happening on their side, (2) to increase the choice of implementation language (we are no longer limited by “web-enabled” frameworks), (3) to improve program performance, (4) to simplify the analysis/testing of the finished application (i.e. broken links in statically generated content vs broken links that may or may not be generated by a pretty-printer).

    What I’m doing in practice is closer to caching than to partial evaluation, since the latter still imposes some limitations on the client technology (but it’s totally possible and feasible for a number of scenarios I can think of). You are right in guessing that the topic of this post is not program transformation as such, but rather a way to create a web app: design the basic data structures, design the views on that data, implement those views as transformations, deploy the transformation results, maintain by rerunning the transformations according to dependencies.

  3. (1) With a dynamic web application it is entirely possible to create a static RESTful HTML view that requires no computation on the client.

    (2) Separating the HTTP request interaction (and leaving that to a standard server) from page generation is clearly a reduction in program complexity. But where the site/app should be interactive, i.e. respond to form input, you cannot escape this interaction (form unpacking, authentication, sessions). To ensure that the right pages are re-generated you potentially increase complexity.

    (3) Performance is typically the only real motivation for partial evaluation. It would be interesting to see whether it would be feasible to apply it automatically to a web app (e.g. in WebDSL).

    (4) With WebDSL we argue that internal links in an application are checked automatically by the typechecker.

    (Hmm, it is interesting that I find myself on the interpreter side of this debate for a change.)
