Just a quick note to help anyone furiously googling. I recently upgraded an application from ember-cli 0.0.44 to ember-cli 0.1.2, which entailed updating the included ember-data from 1.0.0-beta.8 to 1.0.0-beta.10. I ran into a couple of small issues while upgrading, most of which were well covered by the documentation. However, I did hit one mystery issue, which involves the RESTAdapter.
On one controller, I have a ‘createPlan’ action which has the following relevant code:
actions: {
  createPlan: function() {
    var parentObject = this.get('aSelectedBoundProperty');
    var plan = this.store.createRecord('plan', {
      anAttribute: "value",
      parent: parentObject
    });
    plan.save().then(function(plan) {
      // Do some stuff, including a route transition
    });
  }
}
All pretty standard stuff. The code worked, except that when Ember Data used the RESTAdapter to persist the record, it did not include a parent: <id> param in the request body. I tried debugging this every way I could think of; the property was certainly being set on the plan model.
In any case, the fix was to upgrade to ember-data 1.0.0-beta.11; I can confirm that the issue no longer exists in beta.11. On the off chance there are people stuck trying to upgrade to beta.10, I hope this helps - since the bug no longer exists, I didn’t feel comfortable submitting it to the bug tracker.
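For reference, if your project still pulls ember-data in through Bower (as ember-cli 0.1.x projects did by default - check your own setup), the upgrade amounts to bumping the version in bower.json and re-running bower install:

"ember-data": "1.0.0-beta.11"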
TL;DR: If you’re using vim with ember-cli, move your .swp files out of your project directory by adding set directory=~/.vim/_tmp// to your .vimrc.
I’ve recently decided to revisit ember.js and work up a little prototype application with it. The last time I tried ember seriously was just before version 1.0.0 - I think it was 1.0-rc1 or so.
Since then, the tooling has come a long way, and I’ve had a great experience getting up and running with ember development using the ember-cli set of tools. Being able to get a functioning server running with just $ ember server is pretty fantastic, and goes a long way toward avoiding the problems that usually come with working on static sites from a file:/// URL.
However, I did run into a small problem - ember-cli’s Broccoli builder re-builds the project on any change in the project directory - including changes to ‘hidden’ files, like the .filename.swp and .filename.swo files that vim creates automatically to store temporary changes.
As such, whenever I saved a change, Broccoli would rebuild the app (which takes about 300ms) and reload the project in my browser; then the .swp file would change, Broccoli would build the project again, and the browser would reload a second time a full second later - inevitably after I had already started interacting with the application.
I was lucky enough to catch Robert Jackson (@rwjblue), an ember core team member, on Twitter, and he helpfully told me that (a) there is currently no way to have Broccoli ignore a file type, and (b) I could move my .swp files out of the project directory by setting the directory option in vim. He was even kind enough to link me to his dotfiles on GitHub, which contained the following two lines:
set directory=~/.vim/_tmp//
set backupdir=~/.vim/backup//
Adding these two lines to your .vimrc causes vim to store all of its temporary files in the specified directories instead of your project directory, so Broccoli has nothing to mistakenly rebuild from.
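One thing worth checking: vim will not create these directories for you, so make sure they exist before relying on the settings above, e.g.:

mkdir -p ~/.vim/_tmp ~/.vim/backup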
And, thanks to Robert and the ember team for all their work on ember and ember-cli!
Contributing code and maintenance effort to open source projects is a great way to give back to the software community. Making your pull requests automatically mergeable saves upstream maintainers’ time and makes the process more efficient. But how can you send a single commit’s worth of code in a pull request?
The easiest way I have found to do this is by using a special upstream branch and the git cherry-pick command.
I have contributed to the jStat project, so far mostly in a bug fix / testing capacity. Here’s how I send a PR with a single commit’s worth of code. I have a personal fork of the project for contributions - the fork & merge development model is the preferred method for open source projects on GitHub.
First, I code on whatever feature / bugfix / testing branch I want to. I am sure to keep local changes (for example, I use a slightly different Makefile than the default) out of commits that contain code for contribution.
When I am ready to submit a PR, I run git fetch https://github.com/jstat/jstat master. This fetches the master branch of the main project - not my personal fork - and makes it available as FETCH_HEAD.
Then, I run git checkout FETCH_HEAD && git checkout -b upstream. This creates a local branch called upstream that starts from the main project’s master, as opposed to my personal fork’s.
Once on the upstream branch, I can copy over the commit I want with git cherry-pick <hash>. This applies that single commit to the upstream branch. I run the tests again just to double check, since the main project’s master may have changed since I started working on my contribution.
Finally, I send the commit to the remote GitHub repository with git push origin upstream --force. Once it is uploaded, I can create a PR with jstat/jstat:master, the main project, as the base reference, and jamescgibson/jstat:upstream as the branch to merge, and only the commit I cherry-picked will be included.
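Putting the steps together, the whole sequence looks roughly like this (<hash> stands in for the commit to contribute, and origin is assumed to be your personal fork):

git fetch https://github.com/jstat/jstat master   # grab the main project's master as FETCH_HEAD
git checkout FETCH_HEAD                           # detach onto that commit
git checkout -b upstream                          # create a branch starting from it
git cherry-pick <hash>                            # apply only the commit to contribute
git push origin upstream --force                  # publish the branch to your fork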
This is the easiest way I’ve found to reliably create single-commit, automatically-mergeable pull requests, which should make the project maintainer’s life easier. It also lets me keep my contribution overhead totally separate from any personal changes I make (e.g. editing the makefile for my machine’s peculiarities).
Building a new web application today usually starts in the same way: set up a skeleton of an application by running your favorite framework’s generator script, cloning a repository, or copying a custom built starter project. All of these options are great for getting your application up and running quickly. There are few greater joys than being able to go from an idea to a functioning local web server with your new project’s name in less than thirty seconds.
But building applications this way can make us blind to important design decisions. These decisions get made regardless of whether we consider them thoroughly, as our starter scripts will do their best to pick a reasonable default. Even if this does not cause headaches today, it might cause significant problems later on.
In particular, Ruby on Rails (which is my preferred framework) and other like-minded server side frameworks usually make some strong assumptions about your back end. With Rails, we default to SQL databases.
As evidence, consider that while the ‘Rails’ gem has 38 million downloads on RubyGems.org, the ‘pg’ gem required for PostgreSQL has 10.8 million and the ‘mysql2’ gem for MySQL and MariaDB has 12 million. SQLite 3, which is the actual rails default for development, has 10.8 million downloads. The most popular NoSQL backend gem, Mongo, has just 4 million.
There is not a perfect correlation between downloading the Rails gem, downloading a database driver, and creating an application. I suspect that of production rails applications, very few are running on SQLite, and a relatively higher proportion on Mongo and PostgreSQL than the download numbers would suggest. That said, I believe it is clear from the data that for most developers, ‘Rails’ implies ‘SQL’ - probably without careful thought.
Is every application best off with an SQL backend? Trivially no, as evidenced by the existence of Mongo. A more interesting question is, ‘is every application best off with a database?’. I think the answer is still no.
There are plenty of reasons to use a standalone database server as your back end. Most developers are familiar with database systems, libraries and documentation are widely available, performance is good, and since the mid 2000s free and open source database systems have been able to match the performance of high-priced proprietary systems like Oracle Database. In addition, for applications that scale out, using a standard database package can help isolate and conceptualize concurrency problems by moving all persistence into a single, standalone subsystem.
As an exploration of no-database web applications, I wrote James C Gibson as a Service without a database. The web server keeps all of the business data in memory and serializes every update to flat files. When the server starts, those logs are replayed and the objects are read back into memory.
All of the persistence logic is hidden behind a set of Serializer classes and data manager classes, which together hide the JSON serialization and provide a set of ActiveRecord-style finder methods. Instead of Foo.find_by_field(), I have FooManager.find_by_field().
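The real classes are not worth reproducing here, but a minimal sketch of the pattern might look something like the following - the class name, file path, and log format are illustrative assumptions, not the actual JCGaaS code:

require 'json'

# Sketch of a data manager backed by a flat-file log.
# Names and format are assumptions for illustration only.
class PlanManager
  LOG_PATH = 'data/plans.log'

  # Replay the log into memory on first access (i.e. at server start).
  def self.all
    @records ||= if File.exist?(LOG_PATH)
                   File.readlines(LOG_PATH).map { |line| JSON.parse(line) }
                 else
                   []
                 end
  end

  # ActiveRecord-style finder, e.g. PlanManager.find_by_name('basic')
  def self.find_by_name(name)
    all.find { |record| record['name'] == name }
  end

  # Keep the record in memory and append the update to the flat file.
  def self.save(record)
    all << record
    File.open(LOG_PATH, 'a') { |f| f.puts(JSON.generate(record)) }
    record
  end
end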
There are several benefits to this setup.
First, my design is not constrained by a database schema. Some of the objects I serialize have nested arrays and hashes that would be somewhat awkward to arrange in a normal ActiveRecord/SQL setup - with this design, I don’t need to think about tables when I am designing business objects. I do not have any evidence that this freedom improves design, but all else equal, I believe that removing constraints from developers will probably do more good than harm.
Second, my ‘business logic’ objects can be tested in perfect isolation, since they have no outside dependencies at all - as such, I can run all of the non-persistence tests for the entire application, a few dozen in total, in less than 150ms. That is a speed I could only dream of on my current major Rails project; the test suite for UpsideOS takes a few minutes to run. The only untested code is the code that reads and writes the flat storage files, which is only a few lines long and easy to verify by hand.
Third, it will be very easy to move to a different storage engine if that becomes necessary. Since all of my persistence code is isolated in a few well-defined, single-purpose methods, switching to, say, ActiveRecord would be a simple matter of creating the appropriate ActiveRecord classes and pointing my own finder methods at the corresponding AR methods. It would be equally easy to switch to Mongo, or even a less traditional store like Elasticsearch. Certainly, defining my own finder methods at the beginning took a little longer than using AR from the start would have, but being able to switch to any storage back end with a fixed, small amount of effort is worth the initial investment.
Finally, I also avoid the operations overhead of maintaining database servers. The current JCGaaS website is a single Ruby process running on a single server. Setting the server up was as simple as installing Ruby, cloning a repository, and adding the right script to the startup list - so quick and easy that I can automate it with a shell script instead of relying on something like Puppet or Chef. Backing up the system is equally easy, as all I need to do is synchronize a single directory - currently, a cron job runs rsync to another server every hour, but in theory I could do something as simple as mounting the right folder in Dropbox.
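For the curious, the backup job is nothing more exotic than a crontab entry along these lines (the paths and host name are made up for illustration):

0 * * * * rsync -az /srv/jcgaas/data/ backup.example.com:/srv/backups/jcgaas/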
I don’t think my current JSON code will scale well to millions of requests, but by avoiding the extra overhead and pushing all HTML rendering onto the client, I have been able to achieve a throughput of 1,000 requests per second in benchmarks, which is more than enough for a minimum viable product in nearly any field. And if the time ever comes to scale the application, it will be easy to swap out the persistence engine.
Building an application without a database was an instructive and refreshing exercise - I highly recommend it as a tinkering project for all web developers. And, consider alternate storage systems next time you find yourself running rails new.
If you are trying to install elasticsearch on Ubuntu, you may run into a mysterious error where running the suggested
sudo /etc/init.d/elasticsearch start
appears to succeed, but no process is bound to port 9200, and the elasticsearch logs remain empty.
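A quick way to confirm that nothing is actually listening is to hit the port directly (assuming curl is installed); a healthy node returns a small JSON status document, while in this failure mode the connection is simply refused:

curl http://localhost:9200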
If this is the case, I suggest running the elasticsearch binary directly.
If you installed elasticsearch from the .deb installer, the elasticsearch binary is found at /usr/share/elasticsearch/bin/elasticsearch.
In my case, running the elasticsearch binary directly on my Ubuntu machine threw a Java ‘Unsupported major.minor version 51.0’ exception.
This occurred immediately, apparently before logging started, so /var/log/elasticsearch remained empty.
It turns out that as of elasticsearch 1.2, which was released fairly recently, elasticsearch requires Java 7, but Ubuntu’s default-jre package ships Java 6.
To rectify this situation, install Java 7 (you can use either the openjdk-7-jre package or the oracle-java7-installer package provided by ppa:webupd8team/java).
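For example, via the OpenJDK route (the Oracle route works the same way once the PPA has been added with add-apt-repository):

sudo apt-get update
sudo apt-get install openjdk-7-jre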
After installing Java 7, elasticsearch should start correctly, either directly from the binary or via the init script.