
Why XML is far superior to JSON

I recently read an article titled Does XML have a future on the web?, and its conclusions did not surprise me, because the author is himself the creator of JSON. Naturally, he would love to promote his own creation. I would rather ignore the piece, but I do want to address his claims.

Let's see what he's got.

"For data transfer applications, XML is losing ground to JSON because JSON is simply a better data transfer format".
I want to know how? Everyone in the world knows XML is more recognized and widely used and it is well supported by every one of the vendors in the web world (Editors, Application Servers, Parsers, Web Servers, Loaders etc.). JSON's role is what? Simply converting raw text in to a JavaScript object. And how is that a better data transfer format? Can I take the same raw text and use it else where? I don't think so.

I have heard there are some open source frameworks available for JSLT (yes, you heard that right: JavaScript SLT), but I would still challenge their performance. I haven't tried them myself, yet you cannot beat the performance of the browsers' native XSLT engines. I can reuse the same XML data and apply different XSLTs to get different sets of transformed structures; you can't even think about doing that with JSON. On top of that, an XSLT transform runs in milliseconds. With JavaScript and JSON, you'll be lucky to get the same result in seconds.
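
As a minimal sketch of this reuse (assuming a browser with the native XSLTProcessor API; Internet Explorer uses transformNode instead, and the document variables here are placeholders for already-parsed XML and XSLT documents):

// xmlDoc is a parsed XML document; listXsl and tableXsl are two parsed XSLT stylesheets
function transform(xmlDoc, xslDoc) {
	var processor = new XSLTProcessor();
	processor.importStylesheet(xslDoc);
	// returns a document fragment that can be appended to the page
	return processor.transformToFragment(xmlDoc, document);
}

var listView = transform(xmlDoc, listXsl);   // same data, list layout
var tableView = transform(xmlDoc, tableXsl); // same data, table layout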

Also, what about reusability? My server can still send the same XML, and I can put a SOAP wrapper on top of it and expose it as a web service. Anybody can then invoke that web service, and not only from the web side; it can be invoked by server side programs as well. XML is the de facto standard for data integration and data warehousing projects. More and more corporations are moving toward an SOA (service oriented architecture) model, and more and more applications have to use XML as the data transport mechanism.
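
As a minimal sketch of that wrapping (the person payload is illustrative; the namespace is the standard SOAP 1.1 envelope namespace):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <person><name>John Doe</name></person>
  </soap:Body>
</soap:Envelope>

The payload inside the Body is untouched, which is exactly the kind of reuse being described.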

JSON may be fine for applications that are not enterprise level, deal with less data, and have no need for extensibility. But for more scalability, more robustness, and more extensibility, XML is your best bet. XML has more support, and that's what your boss would like to hear.

Get Insight into Digg's Bury System with Ajaxonomy's Bury Recorder

If you have been using the popular service Digg, you know that it is very easy to submit a story, watch it start to gain traction, and then see it buried into the dark abyss. What I find particularly frustrating is that you don't know how many people buried the story or their reasons for the bury. If you have seen Digg Spy you have noticed that the application does show buries, but you can't track that data for one particular story.

After much frustration, Ajaxonomy is now releasing a Bury Recorder application. Here is how it works: you take the story's URL (the URL of the page that the "more" link on the Digg upcoming/popular pages takes you to, or the page that clicking on the story title takes you to from your profile, i.e. http://digg.com/[story]), put it into the application, and once you click "Watch for Buries" the application will start recording any buries that the story receives. This lets you see whether your story had 100 diggs and 5 buries before it was permanently buried, or whether it was more like 100 diggs and 300 buries. The idea is that you submit a story and then have the recorder capture any buries from the moment you start the application watching. Note that in this Beta 1.0 release you have to leave your machine on and the application open in order to make sure that it continues to capture buries.

Before I go into the design and more information on using the application, I want to say that the application is open source and can be modified and put on your own server. If you do change it and/or host it on a different server, we just ask for a link back to us and credit for the initial creation of the application. Also, if you do decide to put it on a server, let us know and we might link to your server as another option to alleviate traffic concerns on ours.

So, now that you are excited, you will want the link to the application. Click here to be taken to the Bury Recorder application.

The following is a quick overview of how to use the application, which should make it a bit less confusing (more than likely most people could figure it out, but this way if it looks like it isn't working you have somewhere to look).

Using the application is as easy as one, two, three. There are, however, two ways to use the application; below is the first.

  1. Open the application (once again the link to the application is here)
  2. Copy and paste the URL of the story into the text box (i.e. http://digg.com/[story])
  3. Click the "Watch for Buries" button and then let the application start recoding buries (make sure not to close the application or to turn off/hibernate your computer)

The other way to use the application is as easy as one, two (yep, there is no three with this method). Before using the steps below you will need to create a bookmarklet, which can be done by following the directions at the bottom of the application.

  1. Click on the bookmarklet from the story page on Digg (this has to be the page that you get when you click on the "more" link in Digg [or, from your profile page, the page that clicking on the title of the story takes you to], which is the same page that you would use to get the URL for the first method)
  2. Click the "Watch for Buries" button and then let the application start recoding buries (make sure not to close the application or to turn off/hibernate your computer)

Now that you know how to use the application, I will go a bit into how it was created. The application reads the JSON feed used by Digg Spy. It does this using Ajax (i.e. the XMLHttpRequest object), which requires a server side proxy due to cross-domain security restrictions. Because of the way the JSON is returned from Digg Spy (it doesn't assign the returned object to a variable), we were forced to use the aforementioned server side proxy and an eval statement instead of DOM manipulation. The application simply polls for updated data every 20 seconds, which makes sure we don't miss any data while not putting too much strain on the server.
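
What follows is a minimal sketch of that polling loop (the proxy URL and handler names are illustrative, not the actual application source, and the IE ActiveX fallback is omitted for brevity):

var POLL_INTERVAL = 20000; // poll every 20 seconds, as the application does
var storyUrl = "http://digg.com/[story]"; // placeholder for the watched story

function pollForBuries() {
	var req = new XMLHttpRequest();
	// proxy.php is an illustrative name for the server side proxy
	req.open("GET", "proxy.php?story=" + encodeURIComponent(storyUrl), true);
	req.onreadystatechange = function() {
		if (req.readyState == 4 && req.status == 200) {
			// the feed is not assigned to a variable, so eval the raw text
			var data = eval("(" + req.responseText + ")");
			recordBuries(data); // illustrative handler that records bury events
		}
	};
	req.send(null);
}

setInterval(pollForBuries, POLL_INTERVAL);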

You can download the full source code for this Beta 1.0 release here.

This release has been tested in Firefox 2.0.0.11 and Internet Explorer 7. It should work in many more browsers, but it has not yet been tested in them. If you try it in a different browser and either find bugs or find that the application works perfectly, we would appreciate it if you contacted us with your testing results.

Also, if you do any cool things with the application, or if you have any cool ideas, feel free to blog about it on this blog. Just sign up for a free account, and once you login, click on "Create Content" => "Blog Entry" and write your post. If the admins of this site feel that your post is an interesting one, they will move it to the home page.

I hope that you find this application useful and that you keep checking back for new versions and improvements.

Object Oriented JavaScript - Should You Use It? - Part 2

This is the continuation of a post that I wrote yesterday (click here to read the original post). This post will go into much more depth and will be a bit more technical.

Now to begin to answer the question of whether you should use object oriented JavaScript (don't worry, I will touch on the fact that we have all already used JavaScript objects). The first thing that we should understand is how it is used and what its advantages and disadvantages are. If you have been programming in a language like C++ or Java, then you are used to a class-based style of object orientation. JavaScript does not use this structure; instead, an object in JavaScript is based on functions and properties.

JavaScript object oriented programming can be written using a few different notations.

The first notation uses the new operator along with the Object() constructor.

var person = new Object();
person.name = "John Doe";
person.height = "6Ft";

person.run = function() {
	this.state = "running";
	this.speed = "4ms^-1";
};

In the above code we define an object named person and then add its own properties. The person.run property holds a function, so calling person.run() will execute it.

The next notation will be familiar if you have ever used JSON. It is referred to as literal notation. This notation simplifies things a bit, and it is much better for sending over the web in an Ajax application, where size really matters.

var rectangle = { 
	upperLeft : { x : 2, y : 2 },
	lowerRight : { x : 4, y : 4},
	method1 : function(){alert("Method had been called" + this.upperLeft.x)}
};

The shortcoming of this notation is that it does not lend itself as well to reusability.

So far you are probably thinking that this is a waste of time and wondering why you would ever use it outside of JSON. Well, now we will start to see the power of this coding style, which lies in reusability.

The example below creates an object and sets the value of its name property.

function cat(name) {
	this.name = name;
	this.talk = function() {
		alert( this.name + " says meeow!" );
	}
}

var cat1 = new cat("felix");
cat1.talk(); //alerts "felix says meeow!"

var cat2 = new cat("ginger");

The above code shows how you can easily create multiple objects based on the same constructor. Which brings me to how you have already used object oriented JavaScript: document.getElementById(), for example, is a method of the document object, an object that you have more than likely used (don't worry, Prototype library lovers, I'll touch on $() in just a moment).

One of the great things about objects is that, using prototype (if you have been programming in ActionScript you will be familiar with the below), you can extend the functionality of an existing object.

The below code is an example of how to use prototype to extend an object.

cat.prototype.changeName = function(name) {
	this.name = name;
}

var firstCat = new cat("pursur");
firstCat.changeName("Bill");
firstCat.talk(); //alerts "Bill says meeow!"

Using prototype you can also extend built-in JavaScript objects, such as the Date object (I believe this is how the Prototype library creates its $() method, which is in essence the document.getElementById() method).

The below example shows how to extend the Array object with the shift and unshift methods, which are not available in some browsers.

if(!Array.prototype.shift) { // if this method does not exist..

	Array.prototype.shift = function(){
		var firstElement = this[0]; // var keeps this from leaking into the global scope
		this.reverse();
		this.length = Math.max(this.length-1,0);
		this.reverse();
		return firstElement;
	}

}

if(!Array.prototype.unshift) { // if this method does not exist..

	Array.prototype.unshift = function(){
		this.reverse();
		for(var i=arguments.length-1;i>=0;i--){
			this[this.length]=arguments[i];
		}
		this.reverse();
		return this.length;
	}
}

If you have been programming in a language like C++ or Java, you are probably very familiar with a class structure, including the idea of a class and a subclass. While most JavaScript object oriented code will not use the following, below is an example of how you can create classes and subclasses in JavaScript.

function superClass() {
  this.supertest = superTest; //attach method superTest
}

function subClass() {
  this.inheritFrom = superClass;
  this.inheritFrom();
  this.subtest = subTest; //attach method subTest
}

function superTest() {
  return "superTest";
}

function subTest() {
  return "subTest";
}

var newClass = new subClass();

alert(newClass.subtest()); // yields "subTest"
alert(newClass.supertest()); // yields "superTest"

So, now that you have seen how you can extend and reuse JavaScript objects, we are still left with the question of whether you should use them. If you are writing code that needs to be reused or extended, it should be a JavaScript object. If you are just writing a few lines of code that don't need to be reused or extended, then procedural style code (a.k.a. typical function style code) is fine. So, the answer to the question is yes in many cases, although there are times when it may be overkill.

The next post will take a look at how JavaScript will soon be changing once (or some would say if) ECMAScript 4 starts getting major browser support.

So, as always, if you have any questions, leave a comment, or you can private message me once you add me to your buddy list (I will add you as a buddy as soon as I see the request), which is available once you login. Also, if you would like to write anything on this blog, you can do so once you login by clicking on create content and then blog entry. The most interesting posts will be promoted to the home page.

Many of the above code samples were taken from this post on JavaScript Kit.

DWR 3.0 vision

DWR 3.0 is going to be released soon. The following is the vision from Joe, founder of DWR.

DWR 2.0 has been out for 6 months or so. At the time, I swore that the next release would be a small one, called 2.1. However, it appears that I’m not good at swearing, because there is a lot in the next release - I think we’re going to have to call it 3.0.

Since 2.0, we've been working on the following: adding support for JSON, Bayeux, image/binary file upload/download, a Hub with JMS/OAA support, and more reverse Ajax APIs. I also want to get some Gears integration going.

There are also a whole set of non-functional things to consider:
* Moving the website to directwebremoting.org
* Restart chasing CLAs, using a foundation CLA rather than a Getahead CLA
* Get some lawyer to create a CLA so Getahead can grant rights to the Foundation (or something similar)
* Get someone to pony up and let us move to SVN
* Unit tests

JSON support: One goal is a RESTian API so you can do something like this: http://example.com/dwr/json/ClassName/methodName?param1=fred;param2=jim and DWR will reply with a JSON structure containing the result of calling className.methodName("fred", "jim"); It would be good to support JSONP along with this. We might also allow POSTing of JSON structures, although I’m less convinced about this because it quickly gets DWR specific, and then what’s the point of a standard. Status - DWR has always used a superset of JSON that I like to call JavaScript. We do this to cope with recursive data, XML objects, and such like. I’ve done most of the work so that DWR can use the JSON subset, but not created the ‘handler’ to interface between the web and a JSON data structure.
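
To make the idea concrete, here is a sketch of how a page might call such an endpoint if it worked as described above (the class, method, and parameter names are just the placeholders from the URL example, not a shipping DWR API):

var req = new XMLHttpRequest();
req.open("GET", "/dwr/json/ClassName/methodName?param1=fred;param2=jim", true);
req.onreadystatechange = function() {
	if (req.readyState == 4 && req.status == 200) {
		// the reply would be a JSON structure containing the method's result
		var result = eval("(" + req.responseText + ")");
		alert(result);
	}
};
req.send(null);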

Bayeux Support: Greg Wilkins (Jetty) committed some changes to DWR, which need some tweaks to get working properly. Greg still intends to complete this.

File/Image Upload and Download: This allows a Java method to return an AWT BufferedImage and have that image turn up in the page, or to take or return an InputStream and have that populated from a file upload or offered as a file download. I’ve had some bug reports that it doesn’t work with some browsers; we also need to find a way to report progress to a web page simply.

DWR Hub and integration with JMS and OpenAjax Hub: We have a hub, along with one-way integration with JMS. The OpenAjax portion will be simple, except for getting the OpenAjax Hub to work smoothly with JMS. Much of this work has not hit CVS yet, but will do soon.

Reverse Ajax Proxy API Generator: The goal with this is a program that will take JavaScript as input, and output a Java API which, when called, generates JavaScript to send to a browser. Some of this work has been tricky, but then meta-meta-programming was always bound to be hard. This currently mostly works with TIBCO GI, but more work will be needed to allow it to extract type information from other APIs.

DOM Manipulation Library: Currently this is limited to window.alert, mostly because I’m not sure how far to take it. There are a set of things like history, location, close, confirm that could be useful from a server, and that are not typically abstracted by libraries.

Gears Integration: I’ve not started this, but it needs to take higher priority than it currently does. It would be very cool if DWR would transparently detect Gears, and then allow some form of guaranteed delivery including resending of messages if the network disappears for a while.

Website: We need to get the DWR website moved away from the Getahead server, and onto Foundation servers. There will be some URLs to alter as part of this, and I don’t want to lose Google juice by doing it badly.
The documentation for DWR 2 was not up to the standards of 1.x, and while it has been getting better, we could still do more. One thing that has held this back has been the lack of a DWR wiki. I hope we can fix this with the server move.

Source Repo: We are currently using CVS hosted by java.net (which is a collab.net instance - yuck). They support SVN, but want to charge me a few hundred dollars to upgrade. Maybe the Foundation can either ridicule them into submission or pay the few hundred dollars for the meta-data so we can host the repo ourselves. The latter option is probably better.

Unit Tests: I've been trying for ages to find a way to automatically test with multiple browsers and servers. WebDriver looked good for a while, but it doesn't look like the project is going anywhere particularly quickly, so I'm back trying to get Selenium to act in a sane way.

XML versus JSON - What is Best for Your App?

One of the biggest debates in Ajax development today is JSON versus XML. This is at the heart of the data end of Ajax, since you usually receive JSON or XML from the server side (although these are not the only ways of receiving data). Below I will list the pros and cons of both methods.

If you have been developing Ajax applications for any length of time, you will more than likely be familiar with XML data. You also know that XML data is very powerful and that there are quite a few ways to deal with it. One way is to simply apply an XSLT style sheet to the data (I won't have time in this post to go over the inconsistent browser support for XSLT, but it is something to look into if you want to do this). This is useful if you just want to display the data. However, if you want to do something programmatic with the data (as in the case of a web service), you will need to parse the data nodes returned to the XMLHttpRequest object, going through the document tag by tag and getting the needed data. Of course, there are quite a few good pre-written libraries that can make going through XML data easier, and I recommend using a good one (I won't go into which libraries I prefer here, but perhaps in a future post). One thing to note is that if you want to get XML data from another domain, you will have to use a server side proxy, as the browser will not allow this kind of data to be received across domains.
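
As a minimal sketch of that tag-by-tag parsing (the element names and response shape are made up for the example, and req is assumed to be a completed XMLHttpRequest):

// suppose the server replied with <people><person><name>...</name></person>...</people>
var xmlDoc = req.responseXML; // the parsed XML document
var names = [];
var people = xmlDoc.getElementsByTagName("person");
for (var i = 0; i < people.length; i++) {
	var nameNode = people[i].getElementsByTagName("name")[0];
	names.push(nameNode.firstChild.nodeValue); // the text inside <name>
}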

JSON is designed to be a more programmatic way of dealing with data. JSON (JavaScript Object Notation) returns data as JavaScript objects. In an Ajax application using JSON, you would receive text through the XMLHttpRequest object (or get the data directly through a script tag, which I will touch on later) and then pass that text through an eval statement, or use DOM manipulation to pass it into a script tag (if you haven't already read my post on using JSON without using eval, click here to read it). The power of this is that you can use the data in JavaScript without any parsing of the text. The downside is that if you just want to display the data, there is no easy way to do so with JSON. JSON is great for web services coming from different domains, since if you load the data through a script tag you can get the data without the cross-domain constraint.
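
As a minimal sketch of the script tag approach (the service URL here is hypothetical, and the service must wrap its JSON in the named callback function):

function handleData(data) {
	alert(data.title); // the data arrives as a ready-to-use JavaScript object
}

var script = document.createElement("script");
script.type = "text/javascript";
// the remote service would respond with: handleData({"title": ...});
script.src = "http://example.com/service?callback=handleData";
document.getElementsByTagName("head")[0].appendChild(script);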

The type of data that you use for your application will depend on quite a few factors. If you are going to use the data programmatically, then in most cases JSON is the better method. On the other hand, if you just want to display the returned data, I would recommend XML. Of course, there may be other factors, such as whether you are using a web service, which could dictate the data method. If you are getting data from a different domain and JSON is available, it may be the better choice. For Ruby on Rails developers, if you would prefer to use JSON and XML is all that is available, the 2.0 release allows you to change XML into JSON. One of the biggest reasons that people use JSON is the size of the data: in most cases JSON uses a lot less data to send to your application (although this may vary depending on the data and how the XML is formed).
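
As a rough illustration of the size difference (the record is made up for the example, and the gap depends entirely on how the XML is formed), the same data in both formats might look like this:

<person><name>John Doe</name><height>6Ft</height></person>

{"name":"John Doe","height":"6Ft"}

The XML version weighs in at 58 bytes against 34 for the JSON, and the gap grows with repeated records, since every field pays for both an opening and a closing tag.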

I would recommend that you take a good look at the application that you are building and decide, based on the above, which type of data to use. There may be more factors than those above, including corporate rules and developer experience, but this should have given you a good idea of when to use either data method.

If you would like to contact me regarding any of the above you can make me your friend on Social Ajaxonomy and send a message to me through the service (Click here to go to my profile on Social Ajaxonomy).

Rails 2.0 Finally Released - What's New

Ruby on Rails is one of the most used frameworks for new web 2.0 startups. This 2.0 release is the second recent present that we web developers have received this Christmas (the first was OpenID 2.0). Since Rails 2.0 was recently released, I wanted to write about the changes.

Below is a rundown of the changes, right from the Ruby on Rails blog.

Action Pack: Resources

This is where the bulk of the action for 2.0 has gone. We’ve got a slew of improvements to the RESTful lifestyle. First, we’ve dropped the semicolon for custom methods in favor of the regular slash. So /people/1;edit is now /people/1/edit. We’ve also added the namespace feature to routing resources that makes it really easy to confine things like admin interfaces:

map.namespace(:admin) do |admin|
  admin.resources :products,
    :collection => { :inventory => :get },
    :member     => { :duplicate => :post },
    :has_many   => [ :tags, :images, :variants ]
end

Which will give you named routes like inventory_admin_products_url and admin_product_tags_url. To keep track of this named routes proliferation, we’ve added the “rake routes” task, which will list all the named routes created by routes.rb.

We’ve also instigated a new convention that all resource-based controllers will be plural by default. This allows a single resource to be mapped in multiple contexts and still refer to the same controller. Example:


  # /avatars/45 => AvatarsController#show
  map.resources :avatars

  # /people/5/avatar => AvatarsController#show 
  map.resources :people, :has_one => :avatar

Action Pack: Multiview

Alongside the improvements for resources come improvements for multiview. We already have #respond_to, but we’ve taken it a step further and made it dig into the templates. We’ve separated the format of the template from its rendering engine. So show.rhtml now becomes show.html.erb, which is the template that’ll be rendered by default for a show action that has declared format.html in its respond_to. And you can now have something like show.csv.erb, which targets text/csv, but also uses the default ERB renderer.

So the new format for templates is action.format.renderer. A few examples:

  • show.erb: same show template for all formats
  • index.atom.builder: uses the Builder format, previously known as rxml, to render an index action for the application/atom+xml mime type
  • edit.iphone.haml: uses the custom HAML template engine (not included by default) to render an edit action for the custom Mime::IPHONE format

Speaking of the iPhone, we’ve made it easier to declare “fake” types that are only used for internal routing. Like when you want a special HTML interface just for an iPhone. All it takes is something like this:

# should go in config/initializers/mime_types.rb
Mime.register_alias "text/html", :iphone

class ApplicationController < ActionController::Base
  before_filter :adjust_format_for_iphone

  private
    def adjust_format_for_iphone
      if request.env["HTTP_USER_AGENT"] && request.env["HTTP_USER_AGENT"][/(iPhone|iPod)/]
        request.format = :iphone
      end
    end
end

class PostsController < ApplicationController
  def index
    respond_to do |format|
      format.html   # renders index.html.erb
      format.iphone # renders index.iphone.erb
    end
  end
end

You’re encouraged to declare your own mime-type aliases in the config/initializers/mime_types.rb file. This file is included by default in all new applications.

Action Pack: Record identification

Piggy-backing off the new drive for resources are a number of simplifications for controller and view methods that deal with URLs. We’ve added a number of conventions for turning model classes into resource routes on the fly. Examples:


  # person is a Person object, which by convention will 
  # be mapped to person_url for lookup
  redirect_to(person)
  link_to(person.name, person)
  form_for(person)

Action Pack: HTTP Loving

As you might have gathered, Action Pack in Rails 2.0 is all about getting closer with HTTP and all its glory. Resources, multiple representations, but there’s more. We’ve added a new module to work with HTTP Basic Authentication, which turns out to be a great way to do API authentication over SSL. It’s terribly simple to use. Here’s an example (there are more in ActionController::HttpAuthentication):

class PostsController < ApplicationController
  USER_NAME, PASSWORD = "dhh", "secret"

  before_filter :authenticate, :except => [ :index ]

  def index
    render :text => "Everyone can see me!"
  end

  def edit
    render :text => "I'm only accessible if you know the password"
  end

  private
    def authenticate
      authenticate_or_request_with_http_basic do |user_name, password|
        user_name == USER_NAME && password == PASSWORD
      end
    end
end

We’ve also made it much easier to structure your JavaScript and stylesheet files in logical units without getting clobbered by the HTTP overhead of requesting a bazillion files. Using javascript_include_tag(:all, :cache => true) will turn public/javascripts/*.js into a single public/javascripts/all.js file in production, while still keeping the files separate in development, so you can work iteratively without clearing the cache.

Along the same lines, we’ve added the option to cheat browsers who don’t feel like pipelining requests on their own. If you set ActionController::Base.asset_host = “assets%d.example.com”, we’ll automatically distribute your asset calls (like image_tag) to asset1 through asset4. That allows the browser to open many more connections at a time and increases the perceived speed of your application.

Action Pack: Security

Making it even easier to create secure applications out of the box is always a pleasure, and with Rails 2.0 we’re doing it on a number of fronts. Most importantly, we now ship with a built-in mechanism for dealing with CSRF attacks. By including a special token in all forms and Ajax requests, you can guard against having requests made from outside of your application. All this is turned on by default in new Rails 2.0 applications, and you can very easily turn it on in your existing applications using ActionController::Base.protect_from_forgery (see ActionController::RequestForgeryProtection for more).

We’ve also made it easier to deal with XSS attacks while still allowing users to embed HTML in your pages. The old TextHelper#sanitize method has gone from a black list (very hard to keep secure) approach to a white list approach. If you’re already using sanitize, you’ll automatically be granted better protection. You can tweak the tags that are allowed by default with sanitize as well. See TextHelper#sanitize for details.

Finally, we’ve added support for HTTP only cookies. They are not yet supported by all browsers, but you can use them where they are.

Action Pack: Exception handling

Lots of common exceptions would do better to be rescued at a shared level rather than per action. This has always been possible by overwriting rescue_action_in_public, but then you had to roll out your own case statement and call super. Bah. So now we have a class level macro called rescue_from, which you can use to declaratively point certain exceptions to a given action. Example:


  class PostsController < ApplicationController
    rescue_from User::NotAuthorized, :with => :deny_access

    protected
      def deny_access
        ...
      end
  end

Action Pack: Cookie store sessions

The default session store in Rails 2.0 is now a cookie-based one. That means sessions are no longer stored on the file system or in the database, but kept by the client in a hashed form that can’t be forged. This makes it not only a lot faster than traditional session stores, but also makes it zero maintenance. There’s no cron job needed to clear out the sessions and your server won’t crash because you forgot and suddenly had 500K files in tmp/session.

This setup works great if you follow best practices and keep session usage to a minimum, such as the common case of just storing a user_id and the flash. If, however, you are planning on storing the nuclear launch codes in the session, the default cookie store is a bad deal. While the sessions can’t be forged (so is_admin = true is fine), their content can be seen. If that’s a problem for your application, you can always just switch back to one of the traditional session stores (but first investigate that requirement as a code smell).

Action Pack: New request profiler

Figuring out where your bottlenecks are with real usage can be tough, but we just made it a whole lot easier with the new request profiler that can follow an entire usage script and report on the aggregate findings. You use it like this:

$ cat login_session.rb
get_with_redirect '/'
say "GET / => #{path}"
post_with_redirect '/sessions', :username => 'john', :password => 'doe'
say "POST /sessions => #{path}"

$ ./script/performance/request -n 10 login_session.rb

And you get a thorough breakdown in HTML and text on where time was spent and you’ll have a good idea on where to look for speeding up the application.

Action Pack: Miscellaneous

Also of note is AtomFeedHelper, which makes it even simpler to create Atom feeds using an enhanced Builder syntax. Simple example:


  # index.atom.builder:
  atom_feed do |feed|
    feed.title("My great blog!")
    feed.updated((@posts.first.created_at))

    for post in @posts
      feed.entry(post) do |entry|
        entry.title(post.title)
        entry.content(post.body, :type => 'html')

        entry.author do |author|
          author.name("DHH")
        end
      end
    end
  end

We’ve made a number of performance improvements, so asset tag calls are now much cheaper and we’re caching simple named routes, making them much faster too.

Finally, we’ve kicked out in_place_editor and autocomplete_for into plugins that live on the official Rails SVN.

Active Record: Performance

Active Record has seen a gazillion fixes and small tweaks, but it’s somewhat light on big new features. Something new that we have added, though, is a very simple Query Cache, which will recognize similar SQL calls from within the same request and return the cached result. This is especially nice for N+1 situations that might be hard to handle with :include or other mechanisms. We’ve also drastically improved the performance of fixtures, which makes most test suites based on normal fixture use be 50-100% faster.

Active Record: Sexy migrations

There’s a new alternative format for declaring migrations in a slightly more efficient format. Before you’d write:

create_table :people do |t|
  t.column, "account_id",  :integer
  t.column, "first_name",  :string, :null => false
  t.column, "last_name",   :string, :null => false
  t.column, "description", :text
  t.column, "created_at",  :datetime
  t.column, "updated_at",  :datetime
end

Now you can write:

create_table :people do |t|
  t.integer :account_id
  t.string  :first_name, :last_name, :null => false
  t.text    :description
  t.timestamps
end

Active Record: Foxy fixtures

The fixtures in Active Record have taken a fair amount of flak lately. One of the key points of that criticism has been the work involved in declaring dependencies between fixtures. Having to relate fixtures through the ids of their primary keys is no fun. That’s been addressed now, and you can write fixtures like this:


  # sellers.yml
  shopify:
    name: Shopify

  # products.yml
  pimp_cup:
    seller: shopify
    name: Pimp cup

As you can see, it’s no longer necessary to declare the ids of the fixtures and instead of using seller_id to refer to the relationship, you just use seller and the name of the fixture.

Active Record: XML in, JSON out

Active Record has supported serialization to XML for a while. In 2.0 we’ve added deserialization too, so you can say Person.new.from_xml("<person><name>David</name></person>") and get what you’d expect. We’ve also added serialization to JSON, which supports the same syntax as XML serialization (including nested associations). Just do person.to_json and you’re ready to roll.

Active Record: Shedding some weight

To make Active Record a little leaner and meaner, we’ve removed the acts_as_XYZ features and put them into individual plugins on the Rails SVN repository. So say you’re using acts_as_list, you just need to do ./script/plugin install acts_as_list and everything will move along like nothing ever happened.

A little more drastic, we’ve also pushed all the commercial database adapters into their own gems. So Rails now only ships with adapters for MySQL, SQLite, and PostgreSQL. These are the databases that we have easy and willing access to test on. But that doesn’t mean the commercial databases are left out in the cold. Rather, they’ve now been set free to have an independent release schedule from the main Rails distribution. And that’s probably a good thing as the commercial databases tend to require a lot more exceptions and hoop jumping on a regular basis to work well.

The commercial database adapters now live in gems that all follow the same naming convention: activerecord-XYZ-adapter. So if you gem install activerecord-oracle-adapter, you’ll instantly have Oracle available as an adapter choice in all the Rails applications on that machine. You won’t have to change a single line in your applications to make use of it.

That also means it’ll be easier for new database adapters to gain traction in the Rails world. As long as you package your adapter according to the published conventions, users just have to install the gem and they’re ready to roll.

Active Record: with_scope with a dash of syntactic vinegar

ActiveRecord::Base.with_scope has gone protected to discourage people from misusing it in controllers (especially in filters). Instead, it’s now encouraged that you only use it within the model itself. That’s what it was designed for and where it logically remains a good fit. But of course, this is all about encouraging and discouraging. If you’ve weighed the pros and the cons and still want to use with_scope outside of the model, you can always call it through .send(:with_scope).

ActionWebService out, ActiveResource in

It’ll probably come as no surprise that Rails has picked a side in the SOAP vs REST debate. Unless you absolutely have to use SOAP for integration purposes, we strongly discourage you from doing so. As a natural extension of that, we’ve pulled ActionWebService from the default bundle. It’s only a gem install actionwebservice away, but it sends an important message nonetheless.

At the same time, we’ve pulled the new ActiveResource framework out of beta and into the default bundle. ActiveResource is like ActiveRecord, but for resources. It follows a similar API and is configured to Just Work with Rails applications using the resource-driven approach. For example, a vanilla scaffold will be accessible by ActiveResource.

ActiveSupport

There’s not all that much new in ActiveSupport. We’ve added a host of new methods like Array#rand for getting a random element from an array, Hash#except to filter undesired keys out of a hash, and lots of extensions for Date. We also made testing a little nicer with assert_difference. Short of that, it’s pretty much just fixes and tweaks.

Action Mailer

This is a very modest update for Action Mailer. Besides a handful of bug fixes, we’ve added the option to register alternative template engines and assert_emails to the testing suite, which works like this:

Assert the number of emails delivered within a block:

  assert_emails 1 do
    post :signup, :name => 'Jonathan'
  end

Rails: The debugger is back

To tie it all together, we have a stream of improvements for Rails in general. My favorite amongst these is the return of the breakpoint in the form of the debugger. It’s a real debugger too, not just an IRB dump. You can step back and forth, list your current position, and much more. It all comes courtesy of the ruby-debug gem, so you’ll have to install that for the new debugger to work.

To use the debugger, you just install the gem, put “debugger” somewhere in your application, and then start the server with --debugger or -u. When the code executes the debugger command, you’ll have it available straight in the terminal running the server. No need for script/breakpointer or anything else. You can use the debugger in your tests too.

Rails: Clean up your environment

Before Rails 2.0, config/environment.rb files everywhere would be clogged with all sorts of one-off configuration details. Now you can gather those elements in self-contained files, put them under config/initializers, and they’ll automatically be loaded. New Rails 2.0 applications ship with two examples in the form of inflections.rb (for your own pluralization rules) and mime_types.rb (for your own mime types). This should ensure that you need to keep nothing but the defaults in config/environment.rb.

Rails: Easier plugin order

Now that we’ve yanked out a fair amount of stuff from Rails and into plugins, you might well have other plugins that depend on this functionality. This can require that you load, say, acts_as_list before your own acts_as_extra_cool_list plugin in order for the latter to extend the former.

Before, this required that you named all your plugins in config.plugins. Major hassle when all you wanted to say was “I only care about acts_as_list being loaded before everything else”. Now you can do exactly that with config.plugins = [ :acts_as_list, :all ].

And hundreds upon hundreds of other improvements

What I’ve talked about above is but a tiny sliver of the full 2.0 package. We’ve got literally hundreds of bug fixes, tweaks, and feature enhancements crammed into Rails 2.0. All this coming off the work of tons of eager contributors working tirelessly to improve the framework in small, but important ways.

I encourage you to scour the CHANGELOGs and learn more about all that changed.

Click here to read the full post on the Ruby on Rails blog.

There are a lot of big changes here that should be useful when developing with Rails. One of my personal favorites is the ability to change XML into JSON which, as someone who likes JSON, I think could come in handy (especially if you are getting data from a web service that uses XML and your application needs JSON). I look forward to seeing what new applications will be built on Rails 2.0.

Special thanks to thegreatone who submitted the Rails 2.0 post from the Ruby on Rails blog on Social Ajaxonomy.
Click here to see the post on Social Ajaxonomy

If you would like to submit a post for chance to have us blog about it click here to go to Social Ajaxonomy or click on the "Social" link located at the top link navigation of Ajaxonomy.


Convert RSS to JSON

John Resig has written another great coding example. The code takes an RSS feed and converts it into JSON. You will also notice that in the code he uses DOM manipulation instead of eval (you can read my post on using JSON without eval by clicking here) to bring the data into the JavaScript.

Below is an excerpt from the post.

Interface

This script currently has a REST interface, accessible via a GET request. The full request would look something like this:
GET http://ejohn.org/apps/rss2json/?url=URL&callback=CALLBACK
The URL parameter would contain the URL of the RSS/Atom feed which you are attempting to convert. The optional callback parameter would reference a callback function that you wish to have called with the new data.

You can test this out by visiting the following URL:
http://ejohn.org/apps/rss2json/?url=http://ejohn.org/index.rdf
Sample Code and Demo

A simple sample program would look something like this:

getRSS("http://digg.com/rss/index.xml", handleRSS);

function handleRSS(rss) {
	alert( "Downloaded: " + rss.title );
}

function getRSS(url, callback) {
	feedLoaded = callback;
	var script = document.createElement('script');
	script.type = 'text/javascript';
	script.src = "http://ejohn.org/apps/rss2json/?url=" + url
		+ "&callback=feedLoaded&t=" + (new Date()).getTime();
	document.getElementsByTagName("head")[0].appendChild(script);
}

Click here to read the full post. The post also contains the back end code as well as the above.

This idea could be used for any XML web service that you would like to have in JSON. So, if you have an application that uses JSON and the data you need is in XML try extending this code to meet your needs.

A JSON Load Object without using Eval

After writing my article on using JSON in Ajax without using eval (click here to read the original article), I thought that it would be easier to have an object that could load the JSON with DOM manipulation. So below is the code for the object.

//JSON Dom Code
var jsonload=new Object();
jsonload.CreateObject=function(codeholderid, jsoncode){
	this.codeholderid=codeholderid; //This is the id (as a string) of the element to hold the script code
	this.jsoncode=jsoncode;
	this.loadJSON(codeholderid, jsoncode);
};
jsonload.CreateObject.prototype={
	loadJSON:function(codeholderid, jsoncode){
		var JSONCode=document.createElement("script");
		JSONCode.setAttribute('type', 'text/javascript');
		JSONCode.text = jsoncode;
		document.getElementById(codeholderid).appendChild(JSONCode);	       
	}
};

This code works by passing in the id of the element that should hold the script and the text of the code that should be run (it could also be used to dynamically load code). This code is great if you are making an Ajax call for content from the same domain (or if you use a server side proxy to work around the restriction). However, if you want to get your JSON from a different domain, you could use the below code.

//JSON Dom Code
var jsonload=new Object();
jsonload.CreateObject=function(codeholderid, jsoncode, url){
	this.codeholderid=codeholderid; //This is the id of the element to hold the script code
	this.jsoncode=jsoncode;
	this.url = url;
	this.loadJSON(codeholderid, jsoncode, url);
};
jsonload.CreateObject.prototype={
	loadJSON:function(codeholderid, jsoncode, url){
		var JSONCode=document.createElement("script");
		JSONCode.setAttribute('type', 'text/javascript');
	        JSONCode.setAttribute("src", url);
		document.getElementById(codeholderid).appendChild(JSONCode);	       
	}
};

In this code you pass in the same arguments as in the first example, plus the additional url argument. The url can point to server side code that creates the JSON, possibly based on parameters passed in through the url.
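
For instance, a hypothetical call (the element id and URL are placeholders for your own) would look like this:

new jsonload.CreateObject("DivHolder", "", "http://example.com/data.js?param=value");

Since the script is loaded through its src attribute in this version, the jsoncode argument goes unused and can simply be passed as an empty string.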

Below is an example that shows how the first example code would work with an Ajax call.

//Ajax Object Code
var net=new Object();
net.READY_STATE_UNINITIALIZED=0;
net.READY_STATE_LOADING=1;
net.READY_STATE_LOADED=2;
net.READY_STATE_INTERACTIVE=3;
net.READY_STATE_COMPLETE=4;
net.ContentLoader=function(url, onload, onerror, callingobject){
	this.url=url;
	this.req=null;
	this.callingobject=callingobject;
	this.onload=onload;
	this.onerror=(onerror) ? onerror : this.defaultError;
	this.loadXMLDoc(url);
};
net.ContentLoader.prototype={
	loadXMLDoc:function(url){
	        if(window.XMLHttpRequest){
	                this.req=new XMLHttpRequest();
	                if (this.req.overrideMimeType) {
	                        this.req.overrideMimeType('text/xml');
	                }
	        } else if (window.ActiveXObject){
	                try {
	                        this.req=new ActiveXObject("Msxml2.XMLHTTP");
	                } catch (err) {
	                        try {
	                                this.req=new ActiveXObject("Microsoft.XMLHTTP");
	                        } catch (err) {}
	                }
	        }
	        if(this.req){
	                try{
				var loader=this;
	                        this.req.onreadystatechange=function(){
	                                loader.onReadyState.call(loader);
	                        };
	                        var TimeStamp = new Date().getTime();//This fixes a cache problem
	                        if(url.indexOf("?")<0){
		                        this.req.open('GET', url+"?timestamp="+TimeStamp, true);
	                        }else{
	                                this.req.open('GET', url+"×tamp="+TimeStamp, true);
	                        }
	                        this.req.send(null);
	                } catch (err){
	                        this.onerror.call(this);
	                }
	        }
	},
	onReadyState:function(){
	        var req=this.req;
	        var ready=req.readyState;
	        if(ready==net.READY_STATE_COMPLETE){
	                var httpStatus=req.status;
	                if(httpStatus==200||httpStatus===0){
	                        this.onload.call(this);
	                } else {
	                        this.onerror.call(this);
	                }
	        }
	},
	defaultError:function(){
	        alert("error fetching data!" + "\n\nreadyState: "+this.req.readyState + "\nstatus: "+this.req.status+"\nheaders: "+this.req.getAllResponseHeaders());
	}
};

//JSON Dom Code
var jsonload=new Object();
jsonload.CreateObject=function(codeholderid, jsoncode){
	this.codeholderid=codeholderid; //This is the id of the element to hold the script code
	this.jsoncode=jsoncode;
	this.loadJSON(codeholderid, jsoncode);
};
jsonload.CreateObject.prototype={
	loadJSON:function(codeholderid, jsoncode){
		var JSONCode=document.createElement("script");
		JSONCode.setAttribute('type', 'text/javascript');
		JSONCode.text = jsoncode;
		document.getElementById(codeholderid).appendChild(JSONCode);	       
	}
};

function StartJSONLoad(url, callingobject){
	//url is the url of the server side script to get the JSON
	//callingobject is the id of the element that is making the ajax call
	var AJAXConnection = new net.ContentLoader(url, FinishJSONLoad, JSONError, callingobject);
}
function FinishJSONLoad(){
	var codeholderid="DivHolder"; //Where DivHolder is the id of the element where the script should be created.
	var JSONObject = new jsonload.CreateObject(codeholderid, this.req.responseText);
}
function JSONError(){
        alert("There was an issue getting the data.");
}

To use this code you simply call the StartJSONLoad function, passing in the URL for the Ajax call (this would go to some server side code that returns the JSON) and the element that is making the call [you get this with document.getElementById('idofelement')], which could be the element that will hold the script. In the FinishJSONLoad function you will need to change the DivHolder string to the id of the element that you would like to hold the JSON code.
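
For example, a call might look like this (getjson.php is a placeholder for your own server side script that returns the JSON):

StartJSONLoad("getjson.php", document.getElementById("DivHolder"));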

I hope this code makes it easy to start using JSON in your web applications without using the "evil" eval!

For more information on JSON visit json.org.
