Courses Project

From Earlham CS Department

The Courses Project (outdated)

The information on this project is outdated and is for the most part in need of rather serious revision. A direction for implementation has been chosen and work progresses steadily; at this point it is a very easy task, depending on a few external factors. In any event, use the information below only as historical reference. --Jon 09:34, 3 Apr 2006 (EST)

Since I'm (albeit slowly) approaching the programming phase, I thought an open discussion about my plans for making the course list dynamic might be useful. For efficiency's sake there's really no need to continually re-parse what will be a pretty much static XML feed from WebDB. So what I thought was: we throw a Perl script in the Courses sub-directory and execute it from crontab every semester or so. It can pull in the XML, apply the XSL transform, and generate the course list. Dynamic, but not wasteful, is what I'm aiming for. Thoughts?
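The regeneration step above could look something like the following sketch. The actual plan is a Perl script with an XSL transform; this is an illustrative Python version (the XML schema, element names, and filenames are all made up, and HTML is rendered directly since stdlib Python has no XSLT):

```python
# Hypothetical sketch of the semester regeneration job. The real plan
# applies an XSL transform to the WebDB feed; here we render HTML
# directly from the parsed XML for illustration.
import xml.etree.ElementTree as ET

def render_course_list(xml_text: str) -> str:
    """Turn a WebDB-style course feed into an HTML course list."""
    root = ET.fromstring(xml_text)
    items = []
    for course in root.findall("course"):
        code = course.findtext("code", "")
        title = course.findtext("title", "")
        items.append(f"<li>{code}: {title}</li>")
    return "<ul>\n" + "\n".join(items) + "\n</ul>"

# Invented sample feed, standing in for the WebDB XML.
sample = """<courses>
  <course><code>CS 310</code><title>Compilers</title></course>
</courses>"""

html = render_course_list(sample)
# In the cron job, this string would be written out as cache.html.
print(html)
```

The cron entry itself would just invoke the script a few times a year, so the cache is only ever rebuilt when a new semester's data could plausibly exist.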

Last updated Jon 23:56, 8 Mar 2006 (EST)

It might make more sense to put the perl script into the cgi-bin, just to help keep the file structure as clean and organized as possible... --Tom 16:51, 12 Mar 2006 (EST)

Well, theoretically I should be able to add to the cgi-bin CVS module; I'll throw it in there when I get back to Indiana. Somehow I lost my ability to tunnel through Quark with SSH to my desktop, so the XML parser will have to wait for implementation until after Spring Break.

We need to come up with a location for cache.html, the generated HTML course list that will serve as the cache for the Perl script. We also need to deal with $recache, the variable that indicates after what period new content should be generated. And, finally, we need to determine whether it is necessary to generate an MD5 sum of the cache to check its validity; otherwise it'd be damned easy for somebody to inject an SSI into cache.html which calls God Knows What.
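The MD5 validity check described above is simple to sketch: record the sum when the cache is generated, then refuse to serve the cache if it no longer hashes to the same value. A Python illustration (the page content here is invented):

```python
# Sketch of the cache-validity check: an injected SSI directive changes
# the bytes of cache.html, so its MD5 no longer matches the recorded sum.
import hashlib

def md5_of(data: bytes) -> str:
    """Hex MD5 digest of the raw cache bytes."""
    return hashlib.md5(data).hexdigest()

def cache_is_valid(cache_bytes: bytes, stored_sum: str) -> bool:
    """True only if the cache still hashes to the recorded MD5 sum."""
    return md5_of(cache_bytes) == stored_sum

page = b"<ul><li>CS 310: Compilers</li></ul>"
recorded = md5_of(page)  # computed at cache-generation time

# Simulate somebody injecting a server-side include into the cache.
tampered = page + b'<!--#exec cmd="..." -->'

print(cache_is_valid(page, recorded))      # True
print(cache_is_valid(tampered, recorded))  # False
```

Note this only detects tampering after the fact; the stored sum itself has to live somewhere the attacker can't also rewrite.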

One thought I had was generating cache.html from the XML parser and then chucking it into the Content database, which if I recall was just brought into existence. We can store it along with an MD5 sum of the page and the time it was cached; hell, we can even create an archive where we keep old caches and let people browse them for historical purposes, or for predicting what courses could be offered in the future, somewhat like what's there now.
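That archive idea could be a single table holding every generated cache with its MD5 sum and timestamp. A sketch (the table name and columns are invented; the actual Content database schema may look nothing like this):

```python
# Hypothetical archive of generated caches: each row keeps the HTML,
# its MD5 sum, and when it was generated, so old versions stay browsable.
import hashlib
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE course_cache (
    cached_at INTEGER,   -- Unix time the cache was generated
    md5sum    TEXT,      -- MD5 of the HTML, for the validity check
    html      TEXT)""")

def archive_cache(html: str) -> None:
    """Insert a newly generated cache alongside its MD5 sum."""
    conn.execute(
        "INSERT INTO course_cache VALUES (?, ?, ?)",
        (int(time.time()), hashlib.md5(html.encode()).hexdigest(), html))

def latest_cache() -> str:
    """Fetch the most recently archived cache (rowid breaks ties)."""
    row = conn.execute(
        "SELECT html FROM course_cache"
        " ORDER BY cached_at DESC, rowid DESC LIMIT 1").fetchone()
    return row[0]

archive_cache("<ul><li>old course list</li></ul>")
archive_cache("<ul><li>new course list</li></ul>")
print(latest_cache())
```

Serving a page is then just "select the newest row", and the historical browsing view is a select over the rest.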

Jon 16:09, 21 Mar 2006 (EST)

Okay, Jim brought up in an e-mail that it would be more efficient to have the Perl script run at a specific time, presumably through a crontab, instead of through an SSI. I want to avoid crontab, so what I'm going to suggest is that we keep the Perl script we have, but run it with AJAX instead of an SSI. What I think should happen is that the Courses page has an iframe linked to the cached, generated course content. That way the user at least sees something on arrival, and odds are it will never be that far off. In the background, JavaScript calls the Perl script and sends as its GET variable the value of cache.html's document.lastModified property, queried through the Document Object Model; the Perl script can then check that against the current date and time to determine whether to regenerate the cache. In any case, the Perl script sends back either the word "refresh" or "norefresh", and that can be evaluated to determine whether to refresh the iframe.
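The server-side half of that check is just an age comparison. Here's a Python sketch of the decision the Perl script would make (the interval value is illustrative; the real $recache hasn't been decided above):

```python
# Sketch of the server-side refresh decision: compare the cache's
# last-modified time against the recache interval and answer with the
# literal strings "refresh" or "norefresh", as described above.
import time

RECACHE_SECONDS = 7 * 24 * 3600  # one week; the real $recache may differ

def refresh_decision(cache_last_modified: float, now: float) -> str:
    """Return "refresh" if the cache is older than the recache interval."""
    age = now - cache_last_modified
    return "refresh" if age > RECACHE_SECONDS else "norefresh"

now = time.time()
print(refresh_decision(now - 3600, now))            # cache is 1 hour old
print(refresh_decision(now - 30 * 24 * 3600, now))  # cache is 30 days old
```

The client-side JavaScript would then refresh the iframe only when the response body is "refresh", so the user never waits on regeneration.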

AJAX allows us to offload the processing to the background; I think it should be considered instead of Crontabs. I'm going to start looking at the feasibility of implementing this, unless anybody has any objections to raise on the matter...

Jon 14:05, 25 Mar 2006 (EST)