Web 2.0

Auto Resize iFrames - Part 1

At times you may need to load content from another domain or website into a web application that you are creating. One of the most common ways to accomplish this is to load the content into an iFrame (assuming you just want to display a web page inside your application). In some cases you will also want the iFrame to resize itself based on the content loaded into it.

The interesting thing is that there is no easy way to do this. You can't simply set the iFrame's width and height properties to 100%, as that makes the iFrame fill the screen rather than resize to fit its content. The solution that I came up with uses a server side proxy (which gets around cross-domain restrictions) and some JavaScript, injected by the proxy code, to resize the iFrame.
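The full write-up of the code is coming in the next post, but to make the idea concrete, here is a minimal sketch of just the resizing piece. It assumes the proxy has already made the framed content same-origin, and the "contentFrame" id and function name are placeholders of mine rather than the final code:

// Resize an iFrame to fit its content. Reading contentDocument only
// works when the framed page is same-origin, which is exactly what
// the server side proxy buys us. The "contentFrame" id is a placeholder.
function resizeIframe(id) {
    var frame = document.getElementById(id);
    var doc = frame.contentDocument || frame.contentWindow.document;
    // Take the larger of the two measurements to smooth over
    // browser differences.
    var height = Math.max(doc.body.scrollHeight,
                          doc.documentElement.scrollHeight);
    frame.style.height = height + "px";
}

// Re-measure whenever the proxied page (re)loads.
document.getElementById("contentFrame").onload = function () {
    resizeIframe("contentFrame");
};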

This first post gives you an idea of how this can be accomplished. In my next post I'll walk through the code and discuss the pros and cons (as well as other methods that could achieve the same effect), so make sure to check your feed reader for the next post...

Create Web 2.0 Progress Bars: jQuery, DHTML, JS, CSS, Photoshop

Progress bars are extremely useful in Ajax applications, as they let people know when you are loading information in the background. Well, the DeveloperFox blog has put together a nice list of progress bars using jQuery, JavaScript, CSS and Photoshop.

Below is an excerpt from the post.

jQuery Progress Bars

1. jQuery.UI ProgressBar Widget

2. HOWTO: PHP and jQuery upload progress bar

3. jqUploader

“jqUploader is a jquery plugin that substitutes html file input fields with a flash-based file upload widget, allowing it to display a progress bar and percentage. It is designed to take most of its customization from the html code of your form directly - so you don’t have to do things twice. For instance, the maximum file size, if specified via html, will be recognized and used in the rich upload interface generated by jqUploader.

The plugin uses the form action attribute value (normally pointing to a server side script handling the upload and other data manipulations), and appends a variable “jqUploader=1” so that the upload code is initiated when it sees that key/value in the posted data.”

4. Progress Bar Plugin

This is a progress bar plugin for jQuery.

To read the full post click here.
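To give a taste of the first item on the list, here is roughly what using the jQuery.UI ProgressBar widget looks like. This is my own minimal sketch (the "#progress" id and the fake timer are made up for the demo), so check the widget's documentation for the exact options:

// Minimal jQuery.UI ProgressBar usage, assuming jQuery and jQuery UI
// are already loaded on the page.
$(document).ready(function () {
    $("#progress").progressbar({ value: 0 });

    // Fake a background task that reports progress every half second.
    var pct = 0;
    var timer = setInterval(function () {
        pct += 10;
        $("#progress").progressbar("option", "value", pct);
        if (pct >= 100) clearInterval(timer);
    }, 500);
});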

Great Accordion Scripts

The accordion is a very useful design element on the modern web, and you can never have enough accordion scripts to choose from. Well, WebTecker has put together a nice list of accordion scripts.

Below is an excerpt from the post.

  • jQuery Horizontal Accordion - This is another jQuery plugin, but the accordion is horizontal. It is very similar to the XBOX 360 interface. This plugin requires the Interface plugin.

  • MooTools Accordion - This MooTools accordion script is very nice and very easy to implement. There are no additional plugins needed to get it to work. The one problem is that there is no support for the script, but you can easily figure out how to integrate it by viewing the source code.

  • Horizontal JavaScript Accordion - This script requires no JavaScript frameworks and is just 1kb. It has been tested in all major browsers. This is a great and easy script to implement (a bare-bones sketch of the same idea appears at the end of this post).

  • Accordion v2.0 - This accordion script is built with Prototype and Scriptaculous. It handles both horizontal and vertical accordions, and can even nest an accordion inside an accordion. You should check this out.

You can read the full list here.

This is a great list, and many of these scripts could be useful if you're looking for a different accordion.
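In the spirit of the framework-free entry above, here is a bare-bones sketch of how a basic accordion works. The markup assumption (a container whose h3 headers are each followed by a content div) and the function names are mine, not taken from any of the listed scripts:

// Find the next element node, skipping whitespace text nodes.
function nextElement(node) {
    do { node = node.nextSibling; } while (node && node.nodeType !== 1);
    return node;
}

// Turn every h3 inside the container into a clickable accordion
// header that shows its own panel and hides the rest.
function initAccordion(containerId) {
    var headers = document.getElementById(containerId)
                          .getElementsByTagName("h3");
    for (var i = 0; i < headers.length; i++) {
        headers[i].onclick = function () {
            // Hide every panel, then reveal the one under this header.
            for (var j = 0; j < headers.length; j++) {
                var panel = nextElement(headers[j]);
                if (panel) panel.style.display = "none";
            }
            var mine = nextElement(this);
            if (mine) mine.style.display = "block";
        };
    }
}

Wiring it up is then just a matter of calling initAccordion("myAccordion") once the page has loaded.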

Is Digg Spy Broken?

Digg Spy is the real-time tracker that shows what is happening on the social network Digg as it happens. The tool is a great way to follow Digg outside of the normal view.

We at Ajaxonomy.com have written a tool called the Digg Bury Recorder that uses a feed from Digg Spy to capture all of the story buries that Digg Spy reports. Recently it was brought to my attention that the Digg Bury Recorder does not seem to be recording new buries. In looking into the issue, I found that the feed is not returning any data.

Thinking that perhaps Digg changed the feed (they have done this before), I went over to Digg Spy to check that the feed we are using is still the correct one. However, after leaving Digg Spy up for a moment, I quickly realized that the Ajax portion of Digg Spy does not appear to be working (go to the page and it will look like it is working, but once it gets past the pre-loaded information it stops updating).

Without the Ajax portion, Digg Spy is useless, and I wonder if Digg knows about the problem or purposely broke the application (perhaps in an attempt to stop applications like the Digg Bury Recorder, although this doesn't really make sense).

Google Health Data API

Google recently released Google Health, an application that makes it easy for users to keep track of their medical records. Along with Google Health itself, they have also released the Google Health Data API.

Below is an excerpt about the Google Health Data API.

The Google Health Data API allows client applications to view and send Health content in the form of Google Data API feeds. Your client application can use the Health Data API to create new medical records, request a list of medical records and query for medical records that match particular criteria.

You can learn more about Google's Health Data API here.

I can see a lot of uses for this API in medical-related mash-ups.

Title Capitalization in JavaScript

John Resig has written a good JavaScript port of John Gruber's excellent Perl script for pretty capitalization of titles. It is amazing how well the script works.

Below is an excerpt from the post.

The excellent John Gruber recently released a Perl script which is capable of providing pretty capitalization of titles (generally most useful for posting links or blog posts).

The code handles a number of edge cases, as outlined by Gruber:

  • It knows about small words that should not be capitalized. Not all style guides use the same list of words — for example, many lowercase with, but I do not. The list of words is easily modified to suit your own taste/rules: "a an and as at but by en for if in of on or the to v[.]? via vs[.]?" (The only trickery here is that “v” and “vs” include optional dots, expressed in regex syntax.)
  • The script assumes that words with capitalized letters other than the first character are already correctly capitalized. This means it will leave a word like “iTunes” alone, rather than mangling it into “ITunes” or, worse, “Itunes”.
  • It also skips over any words with inline dots; “example.com” and “del.icio.us” will remain lowercase.
  • It has hard-coded hacks specifically to deal with odd cases I’ve run into, like “AT&T” and “Q&A”, both of which contain small words (at and a) which normally should be lowercase.
  • The first and last word of the title are always capitalized, so input such as “Nothing to be afraid of” will be turned into “Nothing to Be Afraid Of”.
  • A small word after a colon will be capitalized.

He goes on to provide a full list of edge cases that this script handles.

You can read the full post here.

I can see some uses for this script in applications that want to take user input and turn it into a properly formatted title. This could be handy in a del.icio.us- or Digg-like application.
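To make the approach concrete, here is a heavily simplified sketch of the core rules. This is my own illustration, not Gruber's or Resig's actual code; it only handles the small-word list, the first-and-last-word rule, and leaving mixed-case or dotted words alone:

// Small words that stay lowercase unless they begin or end the title.
var SMALL_WORDS = /^(a|an|and|as|at|but|by|en|for|if|in|of|on|or|the|to|v\.?|via|vs\.?)$/i;

function naiveTitleCaps(title) {
    var words = title.split(/\s+/);
    for (var i = 0; i < words.length; i++) {
        var word = words[i];
        var isEdge = (i === 0 || i === words.length - 1);
        if (!isEdge && SMALL_WORDS.test(word)) {
            words[i] = word.toLowerCase();
        } else if (!/[A-Z]/.test(word.substr(1)) && word.indexOf(".") === -1) {
            // Leave mixed-case words ("iTunes") and dotted words
            // ("del.icio.us") alone, per the rules above.
            words[i] = word.charAt(0).toUpperCase() +
                       word.substr(1).toLowerCase();
        }
    }
    return words.join(" ");
}

// naiveTitleCaps("nothing to be afraid of")
//   => "Nothing to Be Afraid Of"

The real script layers many more edge cases (colons, AT&T-style tokens, the optional dots in "v." and "vs.") on top of this basic shape.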

Google Friend Connect

Later today Google will be launching a new service called Friend Connect. Read Write Web has written an interesting post that raises some concerns about the new service.

Below is an excerpt from the post.

You Can't Use it Yet

While the whole developer and publisher world is anxiously awaiting details from the launch tonight, Google is putting a damper on adoption by limiting the Friend Connect "preview release" to a handful of white listed apps and a short list of selected websites. The company says it has to prove it can scale the infrastructure (ooh, can Google scale? I don't know, better limit the approved users to just a tiny handful!) and it wants to see what kinds of features developers and site owners want to request. Apparently the company believes this feedback is best done by making said parties look from the outside and send emails guessing about what they'd like to see once they are let inside. This seems completely backwards to me.

You Can't Touch What's Inside the Magic Box

Site owners will be able to add Open Social apps to their web pages - sort of. They'll be able to display them inside an iframe, a separate web page inside a webpage. They won't be able to leverage that user data to change what they deliver themselves to their users.

Apps in an iframe may as well be a social sidebar ala Flock or Yoono. Those collections of social apps are probably more useful anyway.

Conversations Are Complicated

Google made it clear during their press call that they are aiming for the easiest, simplest and safest way to enable social apps to be integrated into other websites. It will take less than six months, they promise.

Let's be clear that it's not going to be easy to figure out how to enable all this user data to be mashed up in acceptably safe ways. We asked Google how they can assume that one user's friends on IMeem have permission to access their info out on other sites around the web. They said that users will have to be given the option whether to expose that info to third party sites or not, something we haven't seen any details on yet from the original source social networks. That would be even more difficult if the destination sites had read, much less write, access to that ported-in social networking data.

You can read the full post here.

While the post raises some interesting concerns, I think the service may turn out better than it suggests. The initial release may have some issues, but Google has a proven track record that bodes well for the service over time. We can only wait and see what happens.

Ajaxonomy Launches Labs Page

You have probably seen our Ajaxonomy Labs section on the right hand portion of this blog. Well, since we just launched our fourth application for the labs section, we have decided that it is time for the section to have its own page.

You can get to the page from the Labs link on the top navigation bar of this site. You will still be able to get to the four newest applications in the Ajaxonomy Labs section on the right hand portion of the page. However, all of the applications that we make will be available on the Ajaxonomy Labs page.

We would love to hear your thoughts on the page or on any of the applications. So, take a few minutes and check out the Ajaxonomy Labs page and leave your thoughts in the comments.

BuddyBlend Beta

Today we are releasing the latest application for Ajaxonomy Labs. The application is called BuddyBlend, and it is an easy way to get all of your friends' activity in one place without logging into a bunch of different social networks.

You may be thinking that FriendFeed already does this, and you are correct up to a point. The big difference between FriendFeed and BuddyBlend is that with BuddyBlend you don't need to befriend a new set of people to see what they are doing; it gets the latest updates from the friends you already have on your various social networks. And because I consider FriendFeed a type of social network itself, you can get your FriendFeed friends' updates in your BuddyBlend as well.

BuddyBlend is currently in public beta, so we will continue to improve the application to make it more useful and have it work with more social networks.

Below are the social services that are currently available on BuddyBlend.

  • Digg
  • del.icio.us
  • Twitter
  • flickr
  • FriendFeed
  • Facebook

BuddyBlend uses your free Ajaxonomy account, so if you already have an account there is no need to sign up for a new one. As we continue to improve the application we welcome your feedback, so please let us know how we can make it better.

Streaming Server Side Proxy in .NET

If you want to get XML data from a different domain, you will need to use a server side proxy. While you could use JSON to get around the cross-domain restriction, there are times when a JSON feed is not available and a proxy must be used.
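For context, here is roughly what calling such a proxy looks like from the browser. The /proxy.ashx endpoint name and its url query parameter are placeholders of mine, not part of the tutorial:

// Fetch cross-domain XML via a same-origin server side proxy.
// "/proxy.ashx" is a placeholder; point it at wherever your proxy
// is actually mounted.
function fetchViaProxy(url, callback) {
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject("Microsoft.XMLHTTP");
    xhr.open("GET", "/proxy.ashx?url=" + encodeURIComponent(url), true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            callback(xhr.responseXML);
        }
    };
    xhr.send(null);
}

// Example: pull an external RSS feed through the proxy.
fetchViaProxy("http://example.com/feed.rss", function (xml) {
    alert(xml.getElementsByTagName("item").length + " items");
});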

In the past I have written posts about creating such a proxy; however, all of the example code was in PHP. Well, if you are a .NET developer then you are in luck, because the CodeProject has posted a tutorial on creating a server side proxy in .NET.

Below is an excerpt from the post.

A Basic Proxy

Such a content proxy is also available in my open source Ajax web portal Dropthings.com. You can see from its code on CodePlex how such a proxy is implemented. The following is a very simple synchronous, non-streaming, blocking proxy:

[WebMethod]
[ScriptMethod(UseHttpGet = true)]
public string GetString(string url)
{
    // Download the remote content and return it to the caller in one go.
    using (WebClient client = new WebClient())
    {
        string response = client.DownloadString(url);
        return response;
    }
}

Although it shows the general principle, it's nowhere close to a real proxy because:

  • It's a synchronous proxy and thus not scalable. Every call to this web method causes the ASP.NET thread to wait until the call to the external URL completes.

  • It's non-streaming. It first downloads the entire content on the server, storing it in a string, and then uploads that entire content to the browser. If you pass the MSDN feed URL, it will download that gigantic 220 KB RSS XML on the server, store it in a 220 KB string (actually double the size, as .NET strings are all Unicode strings), and then write 220 KB to the ASP.NET Response buffer, consuming another 220 KB UTF8 byte array in memory. Then that 220 KB will be passed to IIS in chunks so that it can transmit it to the browser.

  • It does not produce proper response headers to cache the response on the server. Nor does it deliver important headers like Content-Type from the source.

  • If the external URL is providing gzipped content, it decompresses the content into a string representation and thus wastes server memory.

  • It does not cache the content on the server. So, repeated calls to the same external URL within the same second or minute will download content from the external URL and thus waste bandwidth on your server.

So, we need an asynchronous streaming proxy that transmits the content to the browser while it downloads from the external domain's server. It will download bytes from the external URL in small chunks and immediately transmit them to the browser. As a result, the browser will see a continuous stream of bytes right after calling the web service, with no delay while the content is fully downloaded on the server.
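From the browser's point of view, the payoff of a streaming proxy is that bytes arrive before the response has finished. With raw XMLHttpRequest you can watch this happen in most browsers (older versions of IE don't expose responseText while the request is in readyState 3); this rough sketch reuses the placeholder /proxy.ashx endpoint from earlier:

// Log partial data as it streams in through the proxy.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/proxy.ashx?url=" +
         encodeURIComponent("http://example.com/feed.rss"), true);
var seen = 0;
xhr.onreadystatechange = function () {
    if (xhr.readyState === 3) {
        // readyState 3 fires repeatedly while data is still arriving.
        var chunk = xhr.responseText.substring(seen);
        seen = xhr.responseText.length;
        if (chunk) console.log("received " + chunk.length + " more bytes");
    }
};
xhr.send(null);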

You can read the full tutorial here.

This post not only shows you how to make a basic server side proxy, but also how to make it streaming. If you are a .NET developer looking to build a server side proxy, I recommend this post.
