Cool Free Christmas Games


It is the weekend before Christmas (I hope Santa brings me a Ferrari or a Wii!) and it is time to have some Christmas fun and games. So now that you have played Elf Bowling for the 50th time (and that is definitely one of the best Christmas games ever made) you may be looking for some new Christmas games.

I found a site that has quite a few great Christmas games (they are all in Flash or Shockwave). I particularly recommend the 3-D snowball fight game, which is quite a bit of fun (a little tip: the Ctrl key is used to throw snowballs).

Click here to go to the games!

So, have a very fun and Merry Christmas playing these great games and looking forward to the holiday festivities (I know for me it would be a great New Year if my Fresno State Bulldogs beat Georgia Tech in their bowl game).

Create Your Own Blog @ Ajaxonomy


That's right, members of Ajaxonomy can create their own personal blog! Registration is free - just sign up over on the right side of the page (or log in with your OpenID). As a registered user you will be able to write and edit posts and have your own RSS feed. If that wasn't enough, your posts even have an opportunity to be promoted to the homepage and main RSS feed of the site.

So, sign up today and get posting!

Object Oriented JavaScript - Should You Use It? - Part 3


So we have come to the last of my articles on object oriented JavaScript. You can read the last post by clicking here. This post will be about how JavaScript is proposed to change when (or if, as the case may be) ECMAScript 4 (the spec that JavaScript is based on) is supported by the major browsers.

One of the changes that I am most happy about is the new proposed class structure. Yes, JavaScript in the near future will be an object oriented language that is more in line with languages like Java and C#.

Below is how the new structure is proposed.

Classes: A class describes an object by presenting those properties (fields) of the object that are always present (the fixed properties or fixtures), including variables, constants, and methods:

class C {
	var val // a variable property
	var large = Infinity // a variable property
	const x = 3.14 // a constant property
	function f(n) { return n+val*2 } // a method property
}

You would create instances of the class using the new operator, just as you currently would with JavaScript objects.

So, the answer to the question of whether you should use object oriented JavaScript is a resounding yes, especially with the direction that it looks like the language is going. To go directly to an overview of ECMAScript 4, click here.

As always if you have any questions you can leave them in the comments or private message me once you login to this site. Also, if you have anything that you would like to blog about on this site all registered users get their own blog on Ajaxonomy and the most interesting posts will be published to the home page.

Making the Most of Java 5: Generics


In the first two articles of this series on Java 5, I explored enums and annotations, the no-brainer enhancements to Java that everyone should use. Now I am going to tackle the more challenging addition to the Java language, generics. In the aftermath of all the changes to the core language that took place in Java 5, generics has emerged as the most problematic, holding significant expressive power but carrying with it a lot of baggage. This baggage has come in the form of many compiler warnings, unexpected compiler errors, and other surprises. Yet, used correctly, generics are very powerful and can actually produce much cleaner code than was possible previously.


The simplest explanation of generics is that they are variables for types (which are bound at compile-time). Quite often in object-oriented programming, there arises a need to represent a "generic" type reference, for which the actual type can be just about anything. A classic example of this is java.util.List: a list can contain any type of object, and so its accessor methods are often represented by an Object return value. This led to the frequent need to cast the return value from these types of methods, a process prone to the dreaded ClassCastException. Generics get rid of the need to do unsafe casting--at the bytecode level, casting is still taking place, but it is guaranteed to be safe by the compiler.

Generics make code clearer. Take for example the following method signature: public List getThings(). What type is contained in the returned List? Without documentation, or code to test it, it would be difficult to tell. With a method like public List<String> getThings(), however, it is clear that the returned structure contains String objects.

Generics enforce type safety, since calling an add("Hello World!") on a List<Integer> produces a compile error. In non-generic code, this would not produce an error.
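To make the two points above concrete, here is a small sketch (my own example, not from the original article) contrasting a raw list, where a cast is needed, with a generic list, where the compiler enforces the element type:

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {
    public static void main(String[] args) {
        // Without generics: a cast is required, and it could fail at runtime
        List rawList = new ArrayList();
        rawList.add("Hello World!");
        String s1 = (String) rawList.get(0); // unchecked cast

        // With generics: no cast, and the element type is checked at compile time
        List<String> stringList = new ArrayList<String>();
        stringList.add("Hello World!");
        String s2 = stringList.get(0);
        // stringList.add(new Integer(1)); // would be a compile error

        System.out.println(s1.equals(s2)); // prints "true"
    }
}
```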


The origin of generics in Java came from an experimental language that extended Java, GJ (Generic Java). Inspired by templates in the C++ programming language, but with significant differences, GJ was the testing ground for the concepts that ultimately became the generics implementation in Java 5. Generics were added to Java through the Java Community Process (JCP) as the JSR-014 proposal.

How Generics Work

The key improvement in Java's implementation of generics (vs. C++ templates) is that Java generics are much more type safe, i.e. Java can enforce type relationships whereas C++ could not. This is because generic types in Java are actually compiled into the .class file rather than text-substituted by a pre-compiler. This means that Collection<A> is a different type than Collection<B>(at least at compile-time), and this difference is enforced by the compiler. But in a strange (and some would say cruel) twist of fate, this generic type information is erased at compile-time--a process referred to as type erasure--when the .java file is compiled (though generic information does seem to be kept in the class file metadata area). Type erasure was done to maintain backward compatibility with older Java code, and is the key source of controversy in the generics implementation, as it is the source of many of the problems and corner cases that generics add to the language. Allowing generic types to be kept at runtime (so-called reified generics) is one of the proposals on the table for Java 7.
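Type erasure is easy to observe directly (a quick sketch of my own, not from the article): two differently-parameterized collections share the same runtime class.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> ints = new ArrayList<Integer>();
        // After erasure, both are plain ArrayList at runtime
        System.out.println(strings.getClass() == ints.getClass()); // prints "true"
    }
}
```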

The Syntax of Generics

Java allows both classes and methods to be genericized. The appropriate declarations are done using angled brackets containing a comma-separated list of type parameters, like <A,B,C> (by convention, generic variables are written as a single capitalized letter, but any valid variable name can be used). Generic declarations occur after the class name in a class declaration but before the return type in a method declaration. Here is an example using both:

public class Structure<T> {
     public T get() { ... }
     public <G> G getDifferent() { ... }
}

Generics can be nested, like Map<String,Collection<Object>>.

Generics can also include wildcards using the ? symbol. Why is this needed, since generic types are already variables? The answer is because generics are not covariant (unlike arrays). That is to say, Collection<Object> c = new ArrayList<String>() will not compile because c.add(new Integer(1)) is perfectly legal but obviously an Integer is not a String. The problem is solved with wildcards, since Collection<?> can be the supertype of any Collection.

Generics can specify a range of types (bounded wildcards or type variables), either subtypes

     List<? extends Node> or List<T extends Node>

or supertypes

     List<? super Element>

(Note: super can only be used with bounded wildcards.)
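As a sketch of how the two kinds of bounds above are typically used together (my own example; the copy method and its names are illustrative, not from the article): a ? extends source can safely be read from, while a ? super destination can safely be written to.

```java
import java.util.ArrayList;
import java.util.List;

public class WildcardDemo {
    // src is a producer of T (? extends), dst is a consumer of T (? super)
    static <T> void copy(List<? super T> dst, List<? extends T> src) {
        for (T t : src) {
            dst.add(t);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<Integer>();
        ints.add(1);
        ints.add(2);
        List<Number> nums = new ArrayList<Number>();
        copy(nums, ints); // a List<Number> can consume Integers
        System.out.println(nums); // prints "[1, 2]"
    }
}
```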

Type Tokens

Because of type erasure, it is impossible to (accurately) get the Class of a generic type at runtime. T.class will not compile and calling getClass() on a generic type returns Class<? extends Object>. But sometimes the Class is needed to correctly cast (the cast() method was added to Class in Java 5, and Class itself was genericized) or newInstance() an object of a generic type. This is where passing in the Class as a "type token" comes in. For example,

public <T> T createObject(Class<T> clazz) throws Exception {
     return clazz.newInstance();
}

Here you are guaranteed that the return type will be of type T, whereas in J2SE 1.4 and earlier, this was not true. In order to illustrate this point let's look at the equivalent non-generic code:

public MyType createObject(Class clazz) throws Exception {
     return (MyType) clazz.newInstance();
}

The cast here is not guaranteed to succeed. If the caller invoked this method using createObject(String.class), a ClassCastException would be raised.
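Here is a usage sketch of the type-token idiom (my own example; note that newInstance() throws checked exceptions, so the factory declares throws Exception):

```java
public class TypeTokenDemo {
    // A generic factory: the type token guarantees the return type is T
    static <T> T createObject(Class<T> clazz) throws Exception {
        return clazz.newInstance();
    }

    public static void main(String[] args) throws Exception {
        String s = createObject(String.class); // no cast needed
        StringBuilder sb = createObject(StringBuilder.class);
        System.out.println(s.length()); // prints "0" (a new, empty String)
        System.out.println(sb.length()); // prints "0"
    }
}
```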

Super Type Tokens

You cannot create type tokens for generic classes because of type erasure, i.e. List<String>.class will not compile. There is a "backdoor" solution to this problem because, actually, not all generic information is erased at runtime. The Class method called getGenericSuperclass() allows the differentiation between, say, List<String> and List<Integer> at runtime using the "super type token" concept. It works like this:

public abstract class TypeReference<T> {
     private final Type type;
     protected TypeReference() {
          Type superclass = getClass().getGenericSuperclass();
          this.type = ((ParameterizedType) superclass).getActualTypeArguments()[0];
     }
     // equals() and hashCode() based on Type
}

A "super type token" would be created using an empty subclass of TypeReference, like

new TypeReference<List<String>>() {}

This would be differentiable from a TypeReference<List<Integer>> at runtime. For a more thorough implementation, see the TypeLiteral class from Google Guice.


There are many corner cases and *gotchas* in generics. Here are a few of my favorites.

  • Generics can make your code very verbose. The classic example of this is the declaration and initialization of a Map:
    Map<String,Collection<Number>> map =
        new HashMap<String,Collection<Number>>();
  • Too many generic declarations and/or overuse of bounded types can make code difficult to read.
  • The getClass() method on any Java object returns a Class<? extends $type>, not Class<$type> (where $type is the type on which the method is called). This means, for example, the code Class<String> c = "Hello World!".getClass() does not compile because Class<? extends String> is not assignable to Class<String>. This tends to force you into using type tokens, often having to add Class<T> to method signatures where you would not have done so otherwise.
  • Wildcards and type variables don't mix. The generic type Collection<? extends Number> is not the same as Collection<T extends Number>, so you must be careful in how you use the two, especially in method signatures.
  • Arrays and generics don't mix. The general advice here is to use generic Collections rather than arrays.
  • Sometimes, because of generics, you will have to do an unsafe cast from one generic type to another. It will happen.
  • @SuppressWarnings is, if not your best friend, at least a close companion (but try to use it only if you can prove that what you are doing is safe).
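As an illustrative sketch of the last two points (my own example; the method name asStringList is hypothetical): an unavoidable unchecked cast, with the warning suppressed because we can argue it is safe.

```java
import java.util.ArrayList;
import java.util.List;

public class SuppressDemo {
    // An unchecked cast we can prove safe, so the warning is suppressed.
    // Safe here because the caller only ever passes lists of Strings.
    @SuppressWarnings("unchecked")
    static List<String> asStringList(Object o) {
        return (List<String>) o;
    }

    public static void main(String[] args) {
        List<String> original = new ArrayList<String>();
        original.add("hello");
        Object erased = original; // static type information is lost here
        List<String> back = asStringList(erased);
        System.out.println(back.get(0)); // prints "hello"
    }
}
```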


Generics are a powerful and, by and large, welcome feature in Java 5. Using them does improve your code. But type erasure opens up a big hole in the Java type system between compile time and runtime. This could be solved by the reification of generics, but reification breaks backward compatibility with Java classes written against pre-Java 5 compilers. A suggested solution has been to introduce optionally reified generics (using something like List<@String> syntax), but I think that having two different types of generics may traumatize the Java type system (and Java programmers) even further. A better decision might be to simply break backward compatibility at some point and let programmers decide if they need reification enough to forgo compatibility.


The Generics Tutorial, Gilad Bracha.
Effective Java Reloaded (JavaOne), Joshua Bloch.
Super Type Tokens, Neal Gafter.
Limitations of Super Type Tokens, Neal Gafter.
Java Generics FAQ, Angelika Langer.

Yahoo! Maps gives Flash the Boot!


Straight from the Yahoo Developer Network, Jason Levitt informs us that Yahoo Maps is now using pure JavaScript instead of a hybrid of Flash and JavaScript. If that wasn't enough, he also informs us that a new version of the Maps AJAX API will be available next year! Read the full post below:

You'd hardly know it by going to the site, but Yahoo! Maps are now pure JavaScript instead of a hybrid of Flash and JavaScript. Lead Maps Developer Mirek Grymuza and the Maps team have done an amazing job of seamlessly moving the Maps client over, resulting in at least double the performance of the previous Flash-based version.

The good news for developers is that the new Yahoo! Maps client uses an enhanced version of our Maps AJAX API which will be available to developers in 2008. This substantial upgrade of the Maps API will provide access to all the overlay components available in the consumer client and will give developers significant overlay flexibility. It's going to be a great new year for Yahoo! Maps users and developers.

Read the original post at the Yahoo Developer Network here.

Object Oriented JavaScript - Should You Use It? - Part 2

This is the continuation of a post that I wrote yesterday (click here to read the original post). This post will go into much more depth and will be a bit more technical.

Now let's begin to answer the question of whether you should use object oriented JavaScript (don't worry, I will touch on the fact that we have all already used JavaScript objects). The first thing that we should understand is how it is used and what the advantages and disadvantages are. If you have been programming in a language like C++ or Java then you are used to a class-based style of object orientation. JavaScript does not use this structure; instead, an object in JavaScript is based on functions and properties.

JavaScript object oriented programming can be written in a few different notations.

The first type of notation is to use the new operator along with the Object() constructor.

person = new Object()
person.name = "John Doe"
person.height = "6Ft"
person.run = function() {
	this.state = "running"
	this.speed = "4ms^-1"
}

In the above code we define an object named person and then add its own properties. Also, calling the run property will execute the function.

The next notation will be familiar if you have ever used JSON. The below notation is referred to as literal notation. This notation simplifies things a bit and is much better for sending over the web in an Ajax application, where such things really matter.

var rectangle = { 
	upperLeft : { x : 2, y : 2 },
	lowerRight : { x : 4, y : 4 },
	method1 : function(){ alert("Method has been called " + this.upperLeft.x) }
}

The shortcoming of this notation is that it does not lend itself as well to re-usability.

So far you are probably thinking that this is a waste of time and wondering why you would ever use it outside of JSON. Well, now we will start to see the power of this coding style, which is in re-usability.

The below example will create an object and will set the value of the name property.

function cat(name) {
	this.name = name;
	this.talk = function() {
		alert( this.name + " says meeow!" )
	}
}

cat1 = new cat("felix")
cat1.talk() //alerts "felix says meeow!"

cat2 = new cat("ginger")
cat2.talk() //alerts "ginger says meeow!"

The above code shows how you can easily create multiple objects based on the same definition. Which brings me to how you have already used object oriented JavaScript. For example, document.getElementById() is a method of the document object, and you have more than likely used it (don't worry Prototype library lovers, I'll touch on $() in just a moment).

One of the great things with objects is that using prototype (if you have been programming in ActionScript you will be familiar with the below) you can now extend the functionality of an existing object in a new object.

The below code is an example of how to use prototype to extend an object.

cat.prototype.changeName = function(name) {
	this.name = name;
}

firstCat = new cat("pursur")
firstCat.changeName("Bill")
firstCat.talk() //alerts "Bill says meeow!"

Using prototype you can extend existing JavaScript objects, such as the Date object (I believe that this is how the Prototype library creates the $() method, which is in essence the document.getElementById() method).

The below example shows how to extend the array object with the shift and unshift methods, which are not available in some older browsers.

if(!Array.prototype.shift) { // if this method does not exist..
	Array.prototype.shift = function(){
		var firstElement = this[0];
		for(var i=0;i<this.length-1;i++){
			this[i] = this[i+1]; // move each element down one slot
		}
		this.length = Math.max(this.length-1,0);
		return firstElement;
	}
}

if(!Array.prototype.unshift) { // if this method does not exist..
	Array.prototype.unshift = function(){
		for(var i=arguments.length-1;i>=0;i--){
			for(var j=this.length;j>0;j--){
				this[j] = this[j-1]; // move each element up one slot
			}
			this[0] = arguments[i];
		}
		return this.length
	}
}

If you have been programming in a language like C++ or Java you are probably very familiar with a class structure. Part of this is the idea of a class and a subclass. While most JavaScript object oriented code will not use the following, below is an example of how you can create classes and subclasses in JavaScript.

function superClass() {
  this.supertest = superTest; //attach method superTest
}

function subClass() {
  this.inheritFrom = superClass;
  this.inheritFrom(); //run the superClass constructor to inherit its methods
  this.subtest = subTest; //attach method subTest
}

function superTest() {
  return "superTest";
}

function subTest() {
  return "subTest";
}

var newClass = new subClass();

alert(newClass.subtest()); // yields "subTest"
alert(newClass.supertest()); // yields "superTest"

So, now that you have seen how you can extend and reuse JavaScript objects, it still leaves us with the question of whether you should use it. If you need to create code that will be reused or extended, then it should be a JavaScript object. If you are just writing a few lines of code that don't need to be reused or extended, then procedural (a.k.a. typical function style) code is fine. So, the answer to the question is yes in many cases, although there are times when it may be overkill.

The next post will take a look at how JavaScript will soon be changing once (or some would say if) ECMAScript 4 starts getting major browser support.

So, as always, if you have any questions leave a comment, or you can private message me once you add me to your buddy list (available once you login; I will add you as a buddy as soon as I see the request). Also, if you would like to write anything on this blog, you can do so once you login by clicking on create content and then blog entry. The most interesting posts will be promoted to the home page.

Many of the above code samples were taken from this post on JavaScript Kit.

Object Oriented JavaScript - Should You Use It? - Part 1


This post was part one of a three part series. You can read post two by clicking here and post three by clicking here.

If you have programmed any in JavaScript you are definitely familiar with the procedural method of coding, but you may not have seen many examples of object oriented JavaScript code. Since many JavaScript coders come from a scripting background, most practice a procedural programming style. However, if you have a background in C++, Java, C#, or any other object oriented programming language (you can program in an object oriented manner in languages such as PHP, however it is often not practiced), you will be interested in seeing how JavaScript object oriented programming differs from these languages.

Below is a simple example of a procedural coding style (this particular example just changes the name value of an anchor tag).

function RenameAnchor(anchorname, anchorid){
	document.getElementById(anchorid).name = anchorname;
}

We have all used this coding style many times in our JavaScript. One problem with it is that it will often cause you to copy and paste portions of code into other functions in order to extend them. While there is nothing necessarily wrong with this style, in some instances there is a better way of coding.

Below is a simple example of an object oriented coding style (once again this particular example just changes the name value of an anchor tag).

var AnchorRename = new Object();
AnchorRename.CreateObject = function(anchorname, anchorid){
	this.anchorid = anchorid; //This is the id of the anchor element
	this.RenameAnchor = function(anchorname, anchorid){
		document.getElementById(anchorid).name = anchorname;
	}
	this.RenameAnchor(anchorname, anchorid);
}

While the above may at first look like it is just more complicated, there are actually practical reasons for using this coding style. The most notable is re-usability. With an object you would create a new object and attach it to a variable name. You would create the new object as in the below code example.

var NewObject = new AnchorRename.CreateObject(variable1, variable2);

Now that you have a variable that is attached to the object, you can access the object and its properties by referring to the variable, so you can re-use it much more easily. Another nice feature of having this in an object is that it can be prototyped into new objects to extend the functionality of the object. While all of this is a little more in depth than I have time for right now, I would be happy to answer any questions left in the comments of this post (or you can add me to your buddy list and private message me).

If you come from a C++ or Java background you will probably notice that this differs quite a bit from the object oriented style that you are used to. You will be happy to know that it looks like the next version of JavaScript should be more in line with what you are used to. I'll be posting more on the new version of JavaScript when we are about to see major implementations of it.

Well, I know I haven't gone into much detail yet, but in my next two posts I will go over how exactly to code in an object oriented style (including different methods to accomplish this) and how coding might change once ECMAScript 4 is fully adopted by the browsers. For now you can find a good tutorial regarding object oriented programming in JavaScript by clicking here.

Sparkline PHP Graphing Library


If you are creating an application or widget that deals with complex data such as a stock ticker you probably want to graph the data. Sparklines are a great way to graph such data in a small space. In fact it is amazing how much information you can get by just a quick look at a Sparkline.

Below is an example of a Sparkline:


Back in the dot-com heyday, you might remember that Cisco, EMC, Sun, and Oracle were nicknamed "the four horsemen of the Internet". You might now say: "companies that ride together, slide together." Large 10-year charts of these four stocks.

           Close   5-Year High   5-Year Low
Cisco      19.55         80.06         8.60
EMC        13.05        101.05         3.83
Sun         5.09         64.32         2.42
Oracle     13.01         43.31         7.32

The below code example is how you make a simple bar graph Sparkline.

 // Set the spacing between the bars
 // Add a color called "mygray" to the available color list
 $graph->setColorHtml("mygray", "#424542");
 // Set the background color to this new color
 // As of 0.2, if you didn't set the color beforehand,
 // your background would be black
 // Begin the loop to add the data
 // Begin the loop to add the data
 foreach ($data as $key => $value) {
 // Add the data
 // This will make one white bar with the relative value
 // of $value
 $graph->setData($key, $value, 'white');
 }
 // Draw all necessary objects for our graph
 // The height will be 16
 // Displays the graph by sending a 'Content-type: image/png'
 // header then outputting the image data

You can get more information about the Sparkline PHP Graphing library at the project's website.

Now that you have the tools to create a Sparkline, go make a cool new application or widget that uses Sparklines.

Debugging Data Issues in Ajax Applications


One of the things that changes when we develop Ajax applications is the visibility of the data being received by the application. In order to get visibility into what is happening on the back end you will probably want to get a good traffic sniffer. This will allow you some visibility into what the XMLHttpRequest object is doing.

If you already use Greasemonkey you will probably want to check out the XMLHttpRequest Tracing and XMLHttpRequest Debugging extensions. The XMLHttpRequest tracing extension allows you to unobtrusively log traffic to the JavaScript console and the XMLHttpRequest Debugging extension is a powerful interactive tool that not only shows the messages, but also lets you set filters and configure the display. As with Greasemonkey these extensions are open source.

Fiddler is a Windows proxy specifically designed for analyzing browser service traffic. Fiddler is free from Eric Lawrence and Microsoft.

Another great tool is Live HTTP Headers, which is a Firefox extension that reveals information about the HTTP headers. The extension will add features to existing menus in Firefox to allow you to get more information about the HTTP headers. For example, it will add a "Headers" tab in the "View Page Info" of a web page.

While these are not the only tools for doing this, they are a few good ones. If you know of any other good tools for traffic sniffing please leave them in the comments so that the community can know about them. Now go out there and find any issues with the data coming into your application through the XMLHttpRequest object.

Making the Most of Java 5: Annotations


In this second in a series of articles on the new Java 5 language features, we'll cover annotations. Annotations are used to represent metadata in the Java language, a feature which comes in particularly handy when designing software frameworks, where some type of metadata is often needed to describe the interaction of client software with the container or framework in which it is running. In such a role, they often come into direct competition with the more traditional way of describing metadata with Java, XML.

First, an introduction.

Syntax and Usage

A new symbol is introduced into Java to handle annotations: the '@' character. It is used both in defining and declaring annotations. When defining an annotation, you use a new keyword, @interface. For example,

@Retention(RetentionPolicy.RUNTIME)
public @interface MyAnnotation {
     public String name();
     //Let's assume they are clever...
     public boolean clever() default true;
}

As the keyword implies, annotations are a special type of interface. Here we have defined a new annotation with two properties, 'name' and 'clever'. The 'clever' property has a default value, which is true. This will be supplied if the annotation declaration does not provide a value for this property. The property types (i.e., the method return types) defined by an annotation are restricted to: primitives, String, Class, annotations, enums, and arrays of these types. Note that there are two basic meta-annotations (annotations on annotations) supplied by Java: @Retention, which specifies at what stages the annotation information is kept, and @Target, which defines on what Java elements an annotation can be used. @Retention takes an enum, RetentionPolicy, that has the values:

CLASS - the annotation is retained in Java .class files but discarded at runtime
SOURCE - the annotation is discarded by the compiler
RUNTIME - the annotation is retained in the Java .class files and kept at runtime

@Target takes an array of an enum, ElementType, which specifies on which Java language elements the annotation can be used. These are:

ANNOTATION_TYPE - an annotation definition
CONSTRUCTOR - a Java type constructor declaration
FIELD - A class-level variable
LOCAL_VARIABLE - A variable declaration local to a block (e.g., a method)
METHOD - A class method
PACKAGE - A package declaration
PARAMETER - A parameter type declared by a Java method
TYPE - a Java class type definition
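Putting the two meta-annotations together, a definition and usage might look like this (a sketch of my own; the Audited annotation and its names are hypothetical, not from any real library):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// A hypothetical annotation: kept at runtime, usable only on methods
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Audited {
    String by() default "unknown";
}

class AuditedDemo {
    @Audited(by = "admin")
    void save() { }

    public static void main(String[] args) throws Exception {
        // RUNTIME retention is what makes this reflective lookup possible
        Audited a = AuditedDemo.class.getDeclaredMethod("save")
                                     .getAnnotation(Audited.class);
        System.out.println(a.by()); // prints "admin"
    }
}
```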

An annotation can then be used in Java source, like this:

public class MyClass {
     @MyAnnotation(name = "MyClass")
     public void doSomething() {
     }
}

Since this is defined as a RUNTIME annotation, it can be reflectively read from the class at runtime. An example might look like this:

public void readAnnotation(Class clazz) throws Exception {
    MyAnnotation a = clazz.getMethod("doSomething")
                          .getAnnotation(MyAnnotation.class);
    System.out.println(a.name() + " is clever? " + a.clever());
}


Annotations are both very simple and, at the same time, powerful. They have redefined the way that many traditionally XML-based frameworks (EJB 3.0, JAXB, Spring) interact with client code. So what are the advantages or disadvantages of each?

1. XML is more verbose than annotations. Typically, in an XML metadata file, a substantial amount of the structure is dedicated to describing the Java object graph and how the metadata affects it. And verbosity is the natural enemy of readability...
2. XML is less type-safe. It is easy, and common, to mis-type a fully-qualified Java type in an XML file. Most of the time the framework does not check that the type is valid, and will wind up throwing a less-informative error somewhere down the line. Since it is much easier to know the type associated with an annotation (via reflection), this is not a problem. For the same reasons, annotations are much more "tool-friendly".
3. XML files can be more centralized, since they are not co-located with the source. This can be an advantage in certain situations where it is more natural to have the metadata in one place.
4. Annotations are faster. Parsing XML is slower than reflecting annotations, though by how much depends a lot on the situation. If this performance is merely a "startup cost" it may not be relevant at all.
5. Annotations are harder to use with third-party libraries. If you have third-party libraries, and you want to use metadata with those classes, you are out of luck unless the library is open-source. With XML, this is not a problem, because it is not as intrusive.

Some have stated that having annotations on Java source introduces coupling between the source and the metadata. This is not true. Annotations are entirely optional, and need not be read at all. The only potential snag is that having annotations on your source rather than using XML does introduce a stronger dependency on the library (-ies) that contain the annotations, since these must be on your classpath for compilation. But for many, this is not a serious issue.


In general, most framework authors have found that the benefits of annotations outweigh the downsides, and POJOs (Plain Old Java Objects) + Annotations have become the new paradigm in creating Java standards. This is not to say that XML has become obsolete by any means. XML has its place, and always will. But Java developers are certainly relieved by the fact that it is not the only option anymore. And framework writers are relieved that they do not need to parse XML just to get client metadata.
