Over the weekend I managed to complete the BerkeleyX Foundations of Computer Graphics class. This was really an excellent class, both in terms of the structure of the lectures and of the homework assignments, which scaled in a decidedly non-linear way: the last assignment took as much time to complete as the previous three combined. But you were eased into it, and you didn’t strictly need to do that last piece of homework (which was to write a ray-tracer from scratch) to get a “pass” on the course.
What’s nice about these MOOCs is that – in order to scale, putting the M in MOOC – they mostly rely either on peer review or – as was true of the two classes I’ve taken so far, on Coursera and edX – on an automatic grader that gives you instant feedback on whether you’ve passed a particular test or homework assignment. And as there’s usually no limit on the number of times you can submit and check your answers, there’s really no reason not to get 100% (other than running out of time or motivation).
If you’re interested in seeing more of the “bloopers” I showed in this previous post, here’s the final version (which doesn’t include any of the images that ended up passing the grader).
The ray-tracer itself was specified to work with just two geometric primitives: triangles and spheres. Since it already handles spheres, yesterday I decided to try it out with the web-service I created to generate Apollonian gaskets and packings (which served as the core of this mammoth series of posts from last year).
Here’s an animated GIF of the various levels and the way they came out:
Bear in mind that GIFs are limited to 256 colours, so these shots include a few visual peculiarities as a result.
The ray-tracing application takes a scene file: simply a text file containing information such as the positions of the camera and lights and the various properties of the geometry you want to show. I threw together a simple Python script that queries the web-service for a certain recursion level, loops through the returned objects, and generates a scene file that can drive the ray-tracer.
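To give a flavour of the format, here’s a minimal hand-written scene using the same commands the script below emits – an image size and output name, a camera, a point light and a single sphere (the values here are just for illustration):

size 640 480
output example.png
camera 0 -3 2.1 0 -1 1 0 1 1 45
point 1 1 3 0.5 0.5 0.5
diffuse 0.6 0.1 0.1
sphere 0 0 0.6 1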
Each scene file the code creates contains a bunch of boilerplate data – aside from the camera and lights there are flat surfaces below and behind the spheres, to make it more visually interesting – before the definitions of the spheres themselves. I haven’t filtered the spheres to remove the ones not near the outside (as I did with visualizations of this data on other platforms), so the scenes are heavier than they need to be. But they all rendered in an acceptable amount of time, so I didn’t worry about that.
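Should it ever become a problem, a filter along those lines would only take a few lines of code. Here’s a minimal sketch, assuming – as the radii in the returned data suggest – that the packing sits inside a unit enclosing sphere, so a sphere counts as “near the outside” if its surface comes within some tolerance of that boundary:

import math

def near_outside(jd, tol=0.05):
    # The distance from the origin to the sphere's centre plus its radius
    # tells us how close its surface gets to the unit boundary
    d = math.sqrt(jd['X'] ** 2 + jd['Y'] ** 2 + jd['Z'] ** 2)
    return d + jd['R'] > 1.0 - tol

You could then keep only the spheres for which near_outside() returns True when looping through the returned JSON data.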
Here’s the Python code I used to generate the scenes driving the ray-tracer; to make them more visually interesting, it borrows some material definitions from this page.
import urllib
import json
# Which material to use for each recursion level of the packing
laycols = {0 : "obsidian",
           1 : "ruby",
           2 : "gold",
           3 : "emerald",
           4 : "turquoise",
           5 : "chrome",
           6 : "obsidian",
           7 : "jade",
           8 : "pearl",
           9 : "silver",
           10 : "copper"
          }
matdefs = {
    # name : (ambient,diffuse,specular,shininess)
    "emerald" : ((0.0215,0.1745,0.0215),
                 (0.07568,0.61424,0.07568),
                 (0.633,0.727811,0.633),
                 0.6),
    "jade" : ((0.135,0.2225,0.1575),
              (0.54,0.89,0.63),
              (0.316228,0.316228,0.316228),
              0.1),
    "obsidian" : ((0.05375,0.05,0.06625),
                  (0.18275,0.17,0.22525),
                  (0.332741,0.328634,0.346435),
                  0.3),
    "pearl" : ((0.25,0.20725,0.20725),
               (1,0.829,0.829),
               (0.296648,0.296648,0.296648),
               0.088),
    "ruby" : ((0.1745,0.01175,0.01175),
              (0.61424,0.04136,0.04136),
              (0.727811,0.626959,0.626959),
              0.6),
    "turquoise" : ((0.1,0.18725,0.1745),
                   (0.396,0.74151,0.69102),
                   (0.297254,0.30829,0.306678),
                   0.1),
    "brass" : ((0.329412,0.223529,0.027451),
               (0.780392,0.568627,0.113725),
               (0.992157,0.941176,0.807843),
               0.21794872),
    "bronze" : ((0.2125,0.1275,0.054),
                (0.714,0.4284,0.18144),
                (0.393548,0.271906,0.166721),
                0.2),
    "chrome" : ((0.25,0.25,0.25),
                (0.4,0.4,0.4),
                (0.774597,0.774597,0.774597),
                0.6),
    "copper" : ((0.19125,0.0735,0.0225),
                (0.7038,0.27048,0.0828),
                (0.256777,0.137622,0.086014),
                0.1),
    "gold" : ((0.24725,0.1995,0.0745),
              (0.75164,0.60648,0.22648),
              (0.628281,0.555802,0.366065),
              0.4),
    "silver" : ((0.19225,0.19225,0.19225),
                (0.50754,0.50754,0.50754),
                (0.508273,0.508273,0.508273),
                0.4)
}
recurse = 1 # recursion depth: how many levels to create
print "size 640 480\noutput apollonian%d.png" % recurse
print "camera 0 -3 2.1 0 -1 1 0 1 1 45"
print "point 1 1 3 0.5 0.5 0.5\npoint -1 -1 5 0.5 0.5 0.5"
print "ambient 0 0 0\nshininess 10\nspecular 0 0 0"
print "emission .1 .1 .1"
print "maxverts 8"
print "vertex -1 -1 -1\nvertex +1 -1 -1\nvertex +1 +1 -1"
print "vertex -1 +1 -1\nvertex -1 -1 +1\nvertex +1 -1 +1"
print "vertex +1 +1 +1\nvertex -1 +1 +1"
print "pushTransform\nscale 2 2 .1\ntranslate 0 -0.5 -4.3"
print "diffuse .3 0 0"
print "tri 0 1 5\ntri 0 5 4\ntri 3 7 6\ntri 3 6 2\ntri 1 2 6"
print "tri 1 6 5\ntri 0 7 3\ntri 0 4 7\ntri 0 3 2\ntri 0 2 1"
print "tri 4 5 6\ntri 4 6 7"
print "popTransform\npushTransform"
print "translate -10 10 -10\nscale 30 .1 20\ndiffuse .3 .3 .3"
print "tri 0 1 5\ntri 0 5 4\ntri 3 7 6\ntri 3 6 2\ntri 1 2 6"
print "tri 1 6 5\ntri 0 7 3\ntri 0 4 7\ntri 0 3 2\ntri 0 2 1"
print "tri 4 5 6\ntri 4 6 7"
print "popTransform\npushTransform\ntranslate 0 0 0.6"
url = "http://apollonian.cloudapp.net/api/spheres/1/%d" % recurse
u = urllib.urlopen(url)
jdata = json.loads(u.read())
for jd in jdata:
amb,dif,spec,shin = matdefs[laycols[jd['L']]]
print "ambient %f %f %f" % (amb[0],amb[1],amb[2])
print "diffuse %f %f %f" % (dif[0],dif[1],dif[2])
print "specular %f %f %f" % (spec[0], spec[1], spec[2])
print "shininess %f" % (shin * 128)
print "sphere %f %f %f %f" % (jd['X'],jd['Y'],jd['Z'],jd['R'])
print "popTransform"
The scene file gets printed to the standard output, so I simply used “python apollonian-scene.py > test.scene” to redirect the results into a file and then ran “raytrace test.scene” to generate the PNG image specified by the output command in the scene file. (If I’d implemented the ray-tracer differently I’d have been able to pipe the scene in via the standard input, but hey.)
Once this year’s AU is out of the way I’m going to have to find another class to sign up for. I’m definitely turning into a MOOC addict. :-)
Update:
There was a saturation glitch in the first images I generated: the areas with the strongest reflections were going beyond 1.0 in one or more of the R, G and B channels, leading to the colours being a bit off. I’ve fixed the animation and reposted it.
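In case you’re wondering what that kind of fix looks like, it usually amounts to clamping each channel to the 0–1 range before quantising to bytes. Here’s a minimal sketch of the idea – not the ray-tracer’s actual code, just the general shape of it:

def to_byte(c):
    # Clamp to [0, 1] first so over-saturated channels don't wrap or
    # skew the colour when scaled to 0-255
    return int(min(max(c, 0.0), 1.0) * 255)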