It's a little surreal that, so many years after first developing it, people are still finding it as useful as ever. Perhaps I should try my hand at an HTML/JS version sometime.
This morning, I worked on a new way of deploying media on scrumdo.com. When I originally set it up years back, we uploaded all of our media to Amazon S3, pointed a CloudFront (their CDN) distribution at it, and were done with it.
There are some hack-ish ways of making S3/CloudFront gzip content, but they seemed like more trouble than they're worth.
A while back, Amazon introduced custom origins for CloudFront. That means you can point CloudFront at your own web server instead of an S3 bucket, and any headers & content your server sends out get cached in CloudFront for you. So today, I decided to start taking advantage of that.
First, I set up my Django app to collect its static files into Apache's web directory:
STATIC_ROOT = "/var/www/html/static"
Then, I set up Apache to actually serve those files.
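The relevant part of the vhost ended up looking roughly like this — a sketch, assuming mod_deflate and mod_expires are enabled; the paths and MIME types are illustrative, not the exact production config:

```apache
# Serve /static/v/<anything>/path straight from disk, ignoring the version segment.
AliasMatch ^/static/v/[^/]+/(.*)$ /var/www/html/static/$1

<Directory /var/www/html/static>
    Require all granted
    # Far-future expiry is safe because the URL changes every release.
    ExpiresActive On
    ExpiresDefault "access plus 1 year"
</Directory>

# Compress text-based responses before they reach CloudFront.
AddOutputFilterByType DEFLATE text/html text/css application/javascript
```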
The AddOutputFilterByType directives tell Apache to compress those MIME types.
The AliasMatch line makes any request to /static/v/ be served directly by Apache instead of the Django app. I added a wildcard in there so I can set long expiration dates and version the media via URL. This means URLs like /static/v/1/app.css, /static/v/2/app.css, and /static/v/anything/app.css all point to the same file.
The Django app is smart enough to change its STATIC_URL setting based on some deployment options to take advantage of this. Overall, this reduces the number of If-Modified-Since requests that come back as 304 responses. It has the downside of effectively invalidating ALL cached files when we put up a new release. (I know there are more elegant solutions here, but this is good enough for us for now.)
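The switching logic looks something like this sketch — the names USE_CDN, RELEASE_VERSION, and the CloudFront hostname are all made up for illustration, not actual ScrumDo settings:

```python
# settings.py (sketch) — version the static URL so each release busts the cache.
USE_CDN = True
RELEASE_VERSION = "2012-09-14"  # bumped on every deploy (illustrative value)
CDN_HOST = "https://d1234abcd.cloudfront.net"  # hypothetical distribution

if USE_CDN:
    # The versioned path matches the Apache AliasMatch wildcard, so a new
    # release changes every static URL and old cached copies are never hit.
    STATIC_URL = "%s/static/v/%s/" % (CDN_HOST, RELEASE_VERSION)
else:
    # Local development serves media directly, unversioned.
    STATIC_URL = "/static/"
```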
Next, I set up a new CloudFront distribution pointing at my ELB (Amazon Elastic Load Balancer).
With that in place, here's what happens when a request comes in: CloudFront checks its edge cache first; on a miss, it forwards the request to the ELB, caches whatever headers and content our server returns, and serves later requests straight from the edge.
In our next major release, we're combining and minifying a bunch of CSS and JS files into a single file (for each type) using grunt.js, which should help speed up the page loads even more.
Dealing with merge conflicts with git isn't too hard, but I keep losing work because other people do it incorrectly. Here's the easiest way to do it:
First, set up a merge tool. I like p4merge since I'm used to it, but feel free to grab any tool that git supports. Here's a link on how to do that.
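For example, assuming p4merge is already installed and on your PATH, the one-time setup is roughly:

```shell
# Tell git which merge tool to launch for `git mergetool`.
git config --global merge.tool p4merge
# Optional: don't leave .orig backup files lying around after a merge.
git config --global mergetool.keepBackup false
```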
Now, every time you want to inflict your changes on others, follow this procedure...
git add (your files)
This causes any changes you've made to go into your staging area.
git commit

This causes your changes to go into your local repository. Make sure to put a meaningful commit message in!
git pull

This goes out and grabs everyone else's changes since you last synced. If you get a merge-conflict message, do the next two steps. If not, skip down to the push.
git mergetool

This will step you through the files that had merge conflicts, one by one, and help you fix them. Assuming you set up a merge tool, you'll have a nice graphical way of doing this. Make sure to save each file after you fix it.
git commit

This commits your merge. You will see ALL the files that you and the other person edited here, not just the files you edited. That's normal. Don't try to remove the other person's changes.
This is the one that people mess up all the time, especially when using a graphical git tool. They might see dozens of files they never touched. Panic. And try to revert them all. But by doing that, they are telling git that they know better than it and this merge shouldn't contain those changes, effectively losing all the changes the other person made.
git push

This pushes the changes you made, and the merge you just did, to the central repo for everyone to get.
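To see the whole procedure end to end, here's a runnable sketch that simulates two people (a hypothetical "alice" and "bob") hitting a conflict and merging it. The repo names and file contents are invented for illustration, and editing the file directly stands in for the mergetool step:

```shell
#!/bin/sh
set -e
export HOME="$(mktemp -d)"              # isolate git config for this demo
git config --global init.defaultBranch master
git config --global pull.rebase false
git config --global user.email demo@example.com
git config --global user.name Demo

root="$(mktemp -d)"
cd "$root"
git init -q --bare "$root/central.git"  # stands in for the central repo
git clone -q "$root/central.git" alice
git clone -q "$root/central.git" bob

# Alice publishes a file.
cd alice
echo "color: red" > style.txt
git add style.txt
git commit -q -m "initial style"
git push -q origin master

# Bob syncs, then both edit the same line.
cd ../bob
git pull -q origin master
echo "color: blue" > style.txt
git add style.txt
git commit -q -m "bob prefers blue"
git push -q origin master
cd ../alice
echo "color: green" > style.txt
git add style.txt
git commit -q -m "alice prefers green"

# Alice's pull now reports a merge conflict...
git pull origin master || true
# ...which she resolves, commits, and pushes.
echo "color: teal" > style.txt
git add style.txt
git commit -q -m "merge: settle on teal"
git push -q origin master
```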
I've been spending a lot of time improving ScrumDo recently; unfortunately, the rate at which we've been able to release new & updated features has been slowing.
The last major change we made was a great new search interface. What should have been a few days, maybe a week tops, took almost a month to complete. I was awfully bummed by that, so once it shipped, I took a step back to figure out why it happened.
I've come to the conclusion that the way I've been developing web apps just isn't scalable for RIAs (Rich Internet Applications). I haven't heard that term in a while, but Rich Internet Apps are what we're making these days. No more render page, click, render page, click, render page. More and more logic runs browser-side, and trying to maintain the old paradigm of page loads is really hurting.
So, here's the new plan: put a proper API on the server, and build the UI as a client-side app that talks to it.
When doing Flex development, I just do all that. There's no thinking about it, it's just the way it's done. I'm not sure why it wasn't obvious here. So my new technology stack looks something like this:
Django Piston for the API - Originally, we were using TastyPie, but I really prefer Piston's way of thinking about resources. There's less "magic" and more explicit definition.
For the client, I'm using Backbone, Underscore, Handlebars templates, and tying it all together with Coffeescript. To make the workflow efficient, we're using Grunt.js to manage all that.
August 3rd was my last day as a regular full time employee, with any luck forever. I'll be running a consulting/contract software development business (of one). So far I've signed a big enough contract to keep me busy for about a year. After that, I hope that ScrumDo and a few other ventures will have grown enough to pay the bills.
Wish me luck!