I’ve been chewing on this for some time now, but I’ve decided it’s time to act. Well, as soon as AU is over, anyway. Which I expect means it’ll morph into a New Year’s resolution for 2015. :-)
Back when this blog was launched, the Git project was still relatively young. But it’s clearly become the version control technology to use, especially when putting code out there in the open. And this blog is all about putting stuff out there in the open, after all.
Autodesk is using GitHub for our PaaS samples – which for now include those for the Viewing & Data web service and AutoCAD I/O – and it’s being used internally for more and more activities elsewhere.
I’ve used GitHub a little for my own viewer samples – and I really like the fact that Heroku links to it, pulling down the source to build the web site/service – but I feel the time has come to dive in more deeply and use the technology more.
In preparation for this, I’m currently on chapter 2 of Pro Git – it’s available for free as an e-book, so I just emailed the .mobi version to my Kindle – and it seems to be a great resource. I’ve already learned a lot from the first chapter and a half.
My plan is to take the various samples I’ve created for this blog over the last 8.5 years and manage them using GitHub, allowing others to contribute fixes. The vast majority of these samples are single files that will just be part of a main aggregator project (that’s how I have them on my system: a main project into which I add the various C# files as I develop them and when I later need to test them), but there will be some additional standalone projects, too.
This is perhaps a bigger job than you might think, for a few reasons. I started to work through the files I have in my aggregator project, but found it was taking too long: over the years I’ve somewhat foolishly polluted the project with code people have sent me to test issues, so I can’t just publish them all as-is.
I’ve decided to take another tack: to use the Typepad API to access post information, extract the HTML content – and the sections of code within it – and compare them programmatically against the files I have locally. This will at least allow me to take all the “valid” files and create a list of the ones that need manual intervention of some kind. That’s the plan, at least – we’ll see how and when it gets completed. I think (or hope) it’ll be worth the effort.
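To give a feel for the comparison step, here’s a minimal sketch in Python of how one might extract the code blocks from a post’s HTML and check whether a local file matches one of them closely enough to be considered “valid”. This is an illustration, not the actual tool: it assumes the post’s code lives in `<pre>` elements and uses a made-up similarity threshold, and it skips the Typepad API call itself (which would just supply the `post_html` string).

```python
import difflib
from html.parser import HTMLParser

class CodeBlockExtractor(HTMLParser):
    """Collects the text found inside <pre> elements in a post's HTML."""
    def __init__(self):
        super().__init__()  # convert_charrefs=True, so entities arrive decoded
        self._in_pre = False
        self._chunks = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "pre":
            self._in_pre = True
            self._chunks = []

    def handle_endtag(self, tag):
        if tag == "pre" and self._in_pre:
            self._in_pre = False
            self.blocks.append("".join(self._chunks))

    def handle_data(self, data):
        if self._in_pre:
            self._chunks.append(data)

def normalize(code):
    """Split into lines, dropping trailing whitespace and blank lines,
    so purely cosmetic differences don't affect the comparison."""
    lines = [line.rstrip() for line in code.strip().splitlines()]
    return [line for line in lines if line]

def matches_post(local_source, post_html, threshold=0.95):
    """True if any code block in the post is near-identical to the local file."""
    parser = CodeBlockExtractor()
    parser.feed(post_html)
    for block in parser.blocks:
        ratio = difflib.SequenceMatcher(
            None, normalize(local_source), normalize(block)).ratio()
        if ratio >= threshold:
            return True
    return False

# Toy example: a post containing a single code block
html_doc = "<p>Here's the command:</p><pre>class Commands\n{\n}</pre>"
print(matches_post("class Commands\n{\n}", html_doc))  # True
print(matches_post("class Other {}", html_doc))        # False
```

Running something like this over every local file against every post should split the collection into files that can be published untouched and files that need a closer look.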