Plugins: usage, distribution and future in Firefox

Over the last few years, the Web has come a long way in providing web-native solutions to technical challenges that previously required authors to use external plugins. This has been and still is a vast undertaking, but there are a number of compelling reasons to ensure that authors have the tools they need to create the content they want, without relying on plugins. Chief among them are security and device limitations. Plugins have long been a major attack vector when it comes to browser exploits, and as more and more users access the web through mobile devices where plugins simply aren’t supported, web-native content becomes less and less the fallback solution, and more and more the primary one.

Statistics and where we get them

When users click “Install plugin…” in the Firefox UI, our Plugin Finder Service (PFS) tries to find a relevant plugin for the mime-type and operating system, and depending on what it finds, it’ll either offer to launch an installer or link to a manual install URL. This allows us to examine which plugins users are trying to install, and whether or not we’re doing a good job of helping them. I should note that we don’t log IP addresses or the referring URL – only mime-type, client OS, Firefox version, Firefox language, and some IP-based geolocation information.

It appears this happens about 1.1 million times per day.

Counting unique mime-types seen over a 7-day period reveals that the web still contains a surprising variety of plugin content; 3542 to be exact. Even after correcting obvious errors and aggregating certain mime-types into groups when more than one can identify a specific plugin (such as Java, for example), we’re left with 2378 different types. Thankfully, most of them have a very low showing, and with further error pruning it’s likely closer to 2000. The Plugin Finder Service, however, handles only 18 of those – and most only for certain OSes.

Global top 20

These are the mime-types that users are trying to install plugins for, and even within the top 10 there are at least 3 surprises. Note: I use brackets to indicate that it’s a group of mime-types.

  1. [Flash] (64.57%)
  2. [Java] (9.59%)
  3. [Windows Media] (4.50%)
  4. application/x-director (3.97%)
  5. [QuickTime] (3.81%)
  6. application/octet-stream (3.34%)
  7. text/html (2.49%)
  8. application/x-vlc-plugin (0.81%)
  9. application/pdf (0.55%)
  10. text/plain (0.46%)
  11. application/vnd.unity (0.43%)
  12. application/qvod-plugin (0.37%)
  13. application/gas-ibh-abn (0.37%)
  14. [Zylom Games] (0.23%)
  15. [Silverlight] (0.20%)
  16. [Skype] (0.20%)
  17. video/divx (0.13%)
  18. audio/x-mpegurl (0.13%)
  19. application/x-hardwaredetection-plugin (0.13%)
  20. application/mozilla-wat-scriptable-plugin-11 (0.12%)

What can we learn from that list?

There are a few things we can say for sure just looking at that list.

  • application/octet-stream, text/html and text/plain should not show up here – we’re clearly presenting the user with the UI at times when we shouldn’t. It appears that text/html is probably a result of embeds trying to load a resource that returns some sort of error page.
  • There are clearly only a select few we really need to help users install – either because authors are providing their own UI to install the relevant plugin, or because the plugin isn’t widely used enough to warrant attention.

Considering the rapid falloff in usage, what percentage of the requests do we actually handle successfully? I don’t have the exact numbers, because the scrubbed version of the data I’ve been working with doesn’t contain OS information, and as mentioned earlier that influences how and whether we can help users install a plugin. A somewhat artificially inflated number then comes out to 86.3%, which is much better than I had feared.

Plugin Distribution

Different markets have different habits, so looking at the global stats isn’t enough to gain insight into how usage of the top 3 plugins is distributed across the globe. Here is a gradient map showing, per country, the percentage of PFS hits that were for Flash:

I’ve generated some interactive maps for Flash, Java and Windows Media where you can get the actual numbers by hovering over the desired country. Do note that for sufficiently small countries there is very little data, so those numbers should be taken with a grain of salt.

The missing plugin UI

Currently, when a user loads a page that includes a plugin they don’t have installed, we show an in-content overlay in its place (if it’s large enough), and a notification bar under the Awesome Bar. The notification bar in particular is really obtrusive, especially for people who either can’t or won’t install a specific plugin.

Both of these problems will be addressed by the work I’ve done to make our UI more reasonable, and removing PFS as a service. The work will replace the notification bar with a door hanger (like the password manager), which will only be open by default for Flash. For Java, Shockwave and QuickTime, only the door hanger anchor (the icon in the Awesome Bar) will be displayed. For all other plugin types, we will at most display the in-content UI – following the existing rules for trying to use the site’s fallback content, etc.

To ensure that people who can’t or won’t install Flash don’t keep getting a door hanger, it will include an option to not open it by default again, after which it will have the same behavior as it does for Java and the other aforementioned plugins.

In the current implementation, we quite often prompt the user to install a plugin, and after guiding them through an install wizard, tell them that we couldn’t help. With these changes, that ought not to happen anymore: we will only ever offer to install plugins we already know we have some way of handling.

Plugin updates

When a plugin author like Adobe updates their plugins, they usually file a bug ahead of time to update our PFS with the correct file hash and download location. However, if this process takes longer than expected, we might end up temporarily serving a plugin known to be insecure, or one that will fail the hash check after download – neither exactly an ideal situation. Removing PFS as a service opens up the possibility of letting companies like Adobe host their own installer information, making it much easier for them to push updates without delay. We still retain a system for blocking specific plugins even when the installer information is provided directly by a third party.

Next step(s)

Hopefully, I (or someone else) will get to remove the wizard dialog, which seems just as out of place now as it did before. As a start, I think I’ll push for a change that triggers it only when we have an actual installer to run – which at the moment means only Flash.

Counting dots

In a recent article by Jared Wein, Unnecessary Regular Expressions, the topic of counting the occurrence of a specific character in a string was brought up. While the main point of the article was readability, performance was also mentioned and I took it as a challenge to find the most efficient method possible.

After experimenting with different algorithms, it’s interesting to note how different the performance is across different browsers and different versions of the same browser. Most notably, Chrome, which is usually held in high regard when it comes to JavaScript performance, doesn’t fare well in most of the tests – and when it does, it’s the result of a cache that doesn’t immediately lend itself to much other use than acing benchmarks. Whether or not this offers any real-world benefit is currently being investigated in bug 688219.

Algorithms

Without further ado, here are the algorithms tested:

regex_char_counting
var count = s.replace(/[^.]*/g, "").length;
iter_char_counting
var j, l = s.length, count = 0;
for (j = 0; j < l; j++) {
  if (s[j] == '.')
    count++;
}
iter_char_counting_nc
var j, l = s.length, count = 0;
for (j = 0; j < l; j++) {
  if (s[j] === '.')
    count++;
}
iter_char_counting_pi
var j, l = s.length, count = 0;
for (j = 0; j < l; ++j) {
  if (s[j] == '.')
    ++count;
}
indexOf
var pos = 0, count = -1, i = -1;
while (pos != -1) {
  pos = s.indexOf(".", i + 1);
  count++;
  i = pos;
}
split
var count = s.split(".").length-1
fast_for_loop
for (var i=s.length, count = 0; i--;) if (s[i]==='.') count++;
lastIndexOf
var count=0, idx=s.length;
while ((idx=s.lastIndexOf(".", idx - 1)) > -1) count++;
string.match
var count = s.match(/([.])/g).length
fast_for_loop_var
for (var i=s.length, count = 0; i--;) count+=s[i]==='.';
indexOf_var
var count=0, pos=-1;
while((pos=s.indexOf(".", pos+1))!=-1) ++count;
indexOf_var2
var count=0, pos=-1;
while((pos=s.indexOf(".", pos+1))!==-1) ++count;
indexOf_var3
var count=0, pos=-1;
while((pos=s.indexOf(".", pos+1))>-1) ++count;
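As a quick sanity check, the variants had better agree on the same input. Here are four of the snippets above wrapped into functions (a rearrangement for testing, not new algorithms):

```javascript
// Quick sanity check that the counting variants agree.
function countSplit(s)   { return s.split(".").length - 1; }
function countRegex(s)   { return s.replace(/[^.]*/g, "").length; }
function countIndexOf(s) {
  var count = 0, pos = -1;
  while ((pos = s.indexOf(".", pos + 1)) !== -1) ++count;
  return count;
}
function countLoop(s) {
  for (var i = s.length, count = 0; i--;) if (s[i] === '.') count++;
  return count;
}

var sample = "a.b.c..d"; // contains 4 dots
[countSplit, countRegex, countIndexOf, countLoop].forEach(function (f) {
  console.log(f.name + "('" + sample + "') = " + f(sample)); // 4 for all
});
```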

You can run the tests yourself and see how the different algorithms stack up on your particular system, by going to http://jsperf.com/character-counting/8

Results

Here I’ve compiled the results of running the tests in the browsers I had available on my Kubuntu 12.04 machine, using clean profiles for each browser.

Graph of performance across algorithms and browsers

In the graph, the effect of the String.prototype.split() cache in Chrome can be clearly seen. To make absolutely sure that the difference is indeed the result of Chrome caching the results, we can write a test like http://jsperf.com/string-split-cache, where we test two different algorithms where s1 and s2 are both strings with the same value:

cache
for (var i=0; i<10; i++) {
  s2 = s2.substr(1)+s2[0];
  var split = s1.split(" ");
}
non-cache
for (var i=0; i<10; i++) {
  s1 = s1.substr(1)+s1[0];
  var split = s1.split(" ");
}

If there is a marked difference in the performance of these two, then the result must be cached in the first. As can be seen from the following graph, Firefox 18 performs almost exactly the same in both cases, whereas Chromium 20 reports vastly different results for the two.

This means that even if your target is solely Chrome users, using an algorithm that depends on String.split() only makes sense if you’re splitting the same string over and over again.

Returning to the problem of counting dots, if you look at the development from Firefox 15 to 18 (current nightly), it’s clear that overall there is a noticeable speedup, yet in many cases there have also been some minor regressions. To get to the bottom of these, we can look at what happens when we disable IonMonkey [IM] – the new JIT for SpiderMonkey – which was introduced in Firefox 18. This can be done by toggling javascript.options.ion.content to false in about:config.

Graph of performance across algorithms in Firefox

For the most part we see that there are two trends emerging: either Firefox 18 with IM is slightly slower than Firefox was before, something I’m sure will change as work on IM progresses, or there is a marked speedup with IM compared to without. The odd one out is the split algorithm, which uses String.split() – a regression that is covered in bug 794117 and/or possibly bug 728045.

The speedup seen in the algorithms iter_char_counting, iter_char_counting_nc, iter_char_counting_pi, fast_for_loop, and to some extent fast_for_loop_var is probably the result of a new fast path for string[int32] introduced in IM. The additional speedup of fast_for_loop_var can at least partly be attributed to a new fast path for int32+boolean, which JägerMonkey didn’t contain.

Mobile

But how’s the situation in the mobile market? Using some of the more popular browsers available on my Samsung Nexus S (running Android 4.1.1), this is the result:

Except for the String.split() case, Firefox and Opera seem to be somewhat in the lead, and again we see the same pattern where Firefox 18 is either slightly slower than 15, or a great deal faster.

Verdict

So what’s the verdict? Cross-browser, it would seem that one of the String.indexOf() variants is the way to go. Is it the most readable, though? It’s hard for me to judge which one is most easily readable, but if I had to choose based solely on that metric, it would probably be fast_for_loop, which sadly has much lower efficiency. Perhaps there’s a better solution out there that is either faster overall or more readable, and if so I’d love to hear about it.

It’s also important to note that depending on the input string length and the frequency of the character being searched for, it’s entirely possible that the results shown above will change to a completely different outcome. If you’re really worried about performance, you’d better test for your expected input.
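A minimal sketch of what “test for your expected input” can look like – Date.now() has coarse resolution, so treat the numbers only as a rough guide, and use jsPerf for anything serious:

```javascript
// Rough timing harness for a counting function on your own expected input.
function countIndexOf(s) {
  var count = 0, pos = -1;
  while ((pos = s.indexOf(".", pos + 1)) !== -1) ++count;
  return count;
}

function bench(name, fn, input, iterations) {
  var t0 = Date.now();
  for (var i = 0; i < iterations; i++) fn(input);
  console.log(name + ": " + (Date.now() - t0) + " ms for " + iterations + " runs");
}

bench("indexOf_var2", countIndexOf, "www.example.com", 100000);
```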

Mormon bubbling made easy – or why I ported an EEDT from C to Javascript

First let us get the terminology straightened out:

Mormon bubbling
The idea came from here (semi-nsfw), and basically entails covering parts of an image to force the viewer’s brain to fill in the blanks.
EEDT
Exact Euclidean Distance Transform

As an avid GIMP’er (that’s the open source alternative to Photoshop, for those who are wondering) and redditor, I inevitably ended up trying my hand at Mormon bubbling. While it wasn’t difficult as such, it sure was laborious. It got me thinking – would it be possible to do this algorithmically? The answer, it turns out, is yes and no.

My early sketch of how it could be done included calculating a distance map from the parts of the image you didn’t want shown. I knew this task to be non-trivial, but I also knew that other people had done it before me – though as I was to find out, not in Javascript.
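For reference, the brute-force version of what such a distance map contains can be written in a few lines. This is the naive O(n²)-per-pixel definition – hopeless for real images, but it is exactly what the meijster2000 algorithm computes efficiently:

```javascript
// Naive exact Euclidean distance transform: for every pixel, the distance
// to the nearest marked ("feature") pixel.
function naiveEDT(marked, width, height) {
  var dist = new Float64Array(width * height);
  for (var y = 0; y < height; y++)
    for (var x = 0; x < width; x++) {
      var best = Infinity;
      for (var fy = 0; fy < height; fy++)
        for (var fx = 0; fx < width; fx++)
          if (marked[fy * width + fx]) {
            var dx = x - fx, dy = y - fy;
            var d = dx * dx + dy * dy; // squared distance; root taken once
            if (d < best) best = d;
          }
      dist[y * width + x] = Math.sqrt(best);
    }
  return dist;
}
```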

I settled on an implementation from Animal, more specifically the algorithm called meijster2000. Before I started porting this to Javascript, I noted all the problems I thought I would encounter, such that I would hopefully have an easier time spotting them:

  • Array indexes
  • Pointers
  • unsigned variables

but as I started the port, it became clear that there was another problem: passing around huge amounts of data. In C that’s just a pointer, but in JavaScript it would quickly end up hurting performance. Instead, I opted to deviate slightly from the C version and make it object oriented, which would let me attach the relevant functions to the image object and keep the data in one place. After some hours of work, I had an algorithm that ran, insofar as it didn’t have any syntax or runtime errors – it just didn’t exactly do what it should, either. On the way to figuring out why, I ran into, amongst other things, a Firefox bug that reduced my performance by about 1000%, and not reading the specs on typed arrays carefully enough also led me to one of the most elusive bugs I’ve had to deal with in JavaScript.

Working with typed arrays

For those of you who do not know how Array.slice() works, here is an excerpt from MDC:

Returns a one-level deep copy of a portion of an array.

Not having carefully read the rest of the documentation (for either Array or typed arrays), this was all I knew – that, and experience from using it over the years. This oversight led me to believe that both of these examples would return true:

var foo = Array(0,1,2,3,4); var bar = foo.slice(0,5); foo[0]=42; bar[0]==0
var foo = new Float64Array([0,1,2,3,4]); var bar = foo.slice(0,5); foo[0]=42; bar[0]==0

However as it turns out, in the latter example bar[0]==42, and that took me hours to spot.

I will note that typed arrays do make a significant difference in speed, so as long as you know what you are doing, use them whenever you can!
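In current engines the two behaviors have separate names: subarray() returns a view sharing the underlying buffer (the behavior that bit me here), while the modern slice() returns an independent copy:

```javascript
// Typed-array views vs copies.
var foo = new Float64Array([0, 1, 2, 3, 4]);

var view = foo.subarray(0, 5); // a view: shares foo's buffer
var copy = foo.slice(0, 5);    // an independent copy (ES2015+)

foo[0] = 42;
console.log(view[0]); // 42 -- the view sees the write
console.log(copy[0]); // 0  -- the copy does not
```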

A missing difference

It also turned out that I had completely missed one important difference between unsigned u and var u, namely that the JavaScript variable is not constrained to being an integer (remember, I had already taken care of the unsignedness because it was on my list) – luckily I didn’t have any other plans for the weekend, which left me plenty of time to debug.
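A small illustration of the difference: in C, unsigned integer division truncates, while the JavaScript equivalent stays fractional, which silently corrupts anything used as an array index. Truncating explicitly restores the C behavior:

```javascript
// In C, "unsigned u = 5 / 2" yields 2; in JavaScript it stays 2.5.
var u = 5 / 2;
console.log(u);                 // 2.5
console.log(Math.floor(5 / 2)); // 2
console.log((5 / 2) | 0);       // 2 (bitwise OR truncates toward zero)
```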

Other byproducts

To create the UI needed for the project, I also created a drop-in canvas editor with unlimited undo, and a small lib to make receiving files via drag and drop easier. Both of these could use more features, and I plan on working more on them and describing them in separate posts at a later time. For now, along with my image library, they can be scrounged from the source code of my bubbler.

In addition to my own creations, I also included the work of several others, including Farbtastic (a color picker by Steven Wittens) and StackBlur (a blur algorithm for canvas by Mario Klingemann, slightly modified for my needs).

The bubbling algorithm

The idea is fairly simple. Ask the user to mark out which parts of the image they don’t want shown, place a bubble as far away as possible from all markings, include the bubble as a marking, repeat. I’ve set a limit of 30 bubbles, and it stops placing bubbles if the radius of the biggest possible bubble is less than 2.
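The loop can be sketched as a self-contained toy version – it recomputes distances with a naive nearest-marking search instead of the real distance map, and marks only each bubble’s center rather than the whole disc:

```javascript
// Naive stand-in for the distance map: distance to the nearest marking.
function nearestMarkingDist(marked, width, height, x, y) {
  var best = Infinity;
  for (var my = 0; my < height; my++)
    for (var mx = 0; mx < width; mx++)
      if (marked[my * width + mx]) {
        var d = Math.sqrt((x - mx) * (x - mx) + (y - my) * (y - my));
        if (d < best) best = d;
      }
  return best;
}

function placeBubbles(marked, width, height) {
  var bubbles = [];
  for (var n = 0; n < 30; n++) {            // hard limit of 30 bubbles
    // The pixel farthest from every marking is the center of the biggest
    // bubble we can still place.
    var best = 0, bx = 0, by = 0;
    for (var y = 0; y < height; y++)
      for (var x = 0; x < width; x++) {
        var d = nearestMarkingDist(marked, width, height, x, y);
        if (d > best) { best = d; bx = x; by = y; }
      }
    if (!isFinite(best) || best < 2) break; // stop below radius 2
    bubbles.push({x: bx, y: by, r: best});
    marked[by * width + bx] = 1;            // the bubble becomes a marking
  }
  return bubbles;
}
```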

This is a very naive algorithm that doesn’t take into account that there might be places you really want shown and others you don’t really care about – the biggest possible bubble wins. I might extend the algorithm to let users place a few bubbles themselves before filling in the rest, or possibly even let the user place all the bubbles by hand, restricted in size by the distance map.

Demo

Here is a live demo of the project; instructions can be found in the bottom left corner. While it should work in most recent browsers, the best experience can be found in Firefox 4b6, as later nightlies suffer from the aforementioned bug making them fairly slow, and Chrome is quite a bit slower when it comes to canvas manipulations and slightly slower at calculating the distance maps. It does not appear to work in Safari, but I am not sure why.

Common pitfalls of jetpack.tabs events

These are a few mistakes I’ve caught myself and others making over and over, often causing very weird behaviour that may cause you to question jetpack’s sanity – or even your own.

Where am I?

A common mistake I’ve seen repeated several times (and caught myself making a few times too) goes something like this:

// Displays the page url when it loads
jetpack.tabs.onReady(function(){
  jetpack.notifications.show(jetpack.tabs.focused.url);
});

Did you spot the mistake? This is a very concise example, so even if you did, you might not have in a larger, more complicated script. The problem, of course, is that the currently focused tab might not have anything to do with the tab that fired the onReady event (the same goes for onFocus, though in that case you can argue with slightly higher confidence that the event was probably fired by the currently focused tab).

So how do you know where you are? The callback function is passed a single argument, but that is a reference to the document, and thus not of much use in this matter. Instead, you should use this.url, so the correct version becomes:

// Displays the page url when it loads
jetpack.tabs.onReady(function(){
  jetpack.notifications.show(this.url);
});

Where’s the content?

This question usually stems from one of two issues. If you’re trying to access javascript objects in the document, you need to use the JSObject wrapper like so:

jetpack.tabs.onReady(function(doc){doc.JSObject.myVar});

But this might fail too, if the object isn’t defined in the main document but is loaded via a script tag or similar. onReady is triggered before external scripts (and frames, images, etc.) are loaded, which means that a lot of content might not yet be available at run time. The remedy, according to the tutorial, is to use onLoad instead, transforming the example thus:

jetpack.tabs.onLoad(function(doc){doc.JSObject.myVar});

However, there is no such thing as jetpack.tabs.onLoad. Why it is in the tutorial, no one seems to know.

So how do you do it? My best attempt is this:

function doStuff(doc) {
  var window = this;
  window.alert("test");
  $(doc).find("body").css("background","red");
};
jetpack.tabs.onReady(function(doc){
  $(this.contentWindow).load(function(){doStuff(doc);});
  if (doc.readyState == "complete") {
    doStuff(doc);
  }
});

But that doesn’t work either, and I’m not sure why. If anyone has a way that actually works, I’d appreciate a comment.

API inconsistencies

Unlike onReady, onFocus does not pass a reference to the document to the callback function. This “feature” usually shows itself when you attempt something like this:

function showTitle(doc) {
  jetpack.notifications.show(doc.title);
}
jetpack.tabs.onFocus(showTitle);
jetpack.tabs.onReady(showTitle);

Unfortunately, it is way too easy to forget to treat these events differently, and not always obvious what mistake you’ve made afterwards. The solution is to either change showTitle to this:

function showTitle() {
  jetpack.notifications.show(this.contentDocument.title);
}

or the call to onFocus to this:

jetpack.tabs.onFocus(function() showTitle(this.contentDocument));

QR Code Generator in Firefox

Some hours ago, a reddit user suggested an extension for Firefox that I thought would be a good example of why Jetpack is going to change the way users think about modifying their browser experience.

It took me literally minutes to complete this extension, all in 7 lines of code. I can’t seem to make WordPress play along, so go to this page and click “install” to try it out.

This is how easy modifying your browser should be. This is how easy it is going to be for everyone in the not too distant future. Jetpack is here to stay!

Note: other people have written similar extensions the old-fashioned way (Mobile Barcoder, and probably others), however this is an illustration of how easy it can be to do it yourself.

Hello World! (polyglot edition)

Sparked from a conversation I had with a friend and fellow programmer some odd hours ago, I decided it would be fun to try to make a program that would run in more than one language. Initially, I thought of Brainf*ck and C(++), but my friend suggested I add Whitespace to the mix too. As it turned out, I managed to throw in Python as well.

After some kind people in #esoteric@irc.freenode.net (I joined originally because it was my first foray into the madness that is Whitespace) told me that this is what is called a polyglot, they helped me shave the last fat off and this is the result. A 384 byte “Hello World!\n” that works in Brainf*ck, Whitespace, Python and C++. Not the worst thing I’ve spent a sleepless night on.

As for acknowledgements, I admit that the bf version was one I had lying around, and I don’t know if it was my own or someone else’s. I did the ws version myself because I couldn’t find a working version anywhere. I do owe thanks to coppro and immibis from #esoteric, they were most helpful in the final stages of the process.

UPDATE: reddit users mallardtheduck, isionous and youreameme have suggested further enhancements, which have resulted in shaving off a further 28 bytes, so here is an updated version weighing in at 356 bytes. Many thanks, reddit!

Breaking “The Next Big Thing in CAPTCHAs”

I just stumbled upon this article (since taken down, see google cache) by Catch My Fame, and figured I would take him up on the challenge to break it. It was a good opportunity to play around with canvas and bookmarklets, neither of which I had tried before.

I won’t bore you with too many details, but the gist of it all is to transfer the image into canvas, then do some edge matching and figure out the best solution.
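The “edge matching” boils down to a fitness function: for each candidate slider offset, compare the pixel columns meeting at the seam and pick the offset where they differ least. Here is a sketch of that idea, with plain arrays standing in for canvas ImageData and columnsAt as a hypothetical helper that would read the two columns out of the canvas for a given offset (not my actual bookmarklet code):

```javascript
// Fitness: sum of absolute channel differences along the seam.
function seamScore(leftColumn, rightColumn) {
  var score = 0;
  for (var i = 0; i < leftColumn.length; i++)
    score += Math.abs(leftColumn[i] - rightColumn[i]);
  return score;
}

// Try every offset and keep the one with the lowest score.
function bestOffset(columnsAt, maxOffset) {
  var best = Infinity, bestOff = 0;
  for (var off = 0; off < maxOffset; off++) {
    var cols = columnsAt(off); // [leftColumn, rightColumn] at this offset
    var s = seamScore(cols[0], cols[1]);
    if (s < best) { best = s; bestOff = off; }
  }
  return bestOff;
}
```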

Without further ado, here are the solutions as a bookmarklet (follow the link to see it, WordPress wouldn’t let me link directly) and as a Ubiquity command

It does have its limits, but I’ve only seen it fail once, so it has a success rate of about 98% (which is better than my success rate when using Facebook’s CAPTCHA). Also, I have only tested in Firefox, so no promises if you’re using something else.

If you want to understand what’s going on, I’d advise you to look at the Ubiquity command first; very little is related to Ubiquity.

Does this mean I caught his fame? :-)

UPDATE: It appears Mr. Catch My Fame didn’t want to display his post touting a broken CAPTCHA system, so he took the post down – the demo is still up, so if you hurry you can still see it in action: http://www.catchmyfame.com/jquery/slider_captcha/

UPDATE 2: I have made a small change in the fitness function, making it much more reliable — you can track changes by looking at the Ubiquity command at github

UPDATE 3: The demo has since also been taken down, but as they say, once burned twice shy, and I made a copy well in advance: sliderCAPTCHA. Also, the original blog post appears in google cache (thanks semanticist/Reddit)

Bespin in Ubiquity and Jetpack

Lately I’ve been spending some time working on Bespin integration in Jetpack and Ubiquity. “But Jetpack already had Bespin support!” some of you might think, and yes, it did – but only for people running OSX. The rest of us were stuck with a textarea, styled to look slightly like Bespin (and indeed, most people didn’t even notice it wasn’t the real one). Ubiquity had CodeMirror for certain versions of Firefox, but for most users it would be disabled due to an incompatibility between later versions of Firefox (3.1b1+ I think) and the CodeMirror project.

The problem — and the reason Ubiquity didn’t change over sooner — was that the official version of Bespin embedded was running a quite old Bespin (0.2.2), while the current version is 0.4.3, and the former didn’t support clipboard copy/paste in anything but OSX.

Long story short: a few bugs later and a lot of help from some very friendly Bespin developers (thanks Alex and Kevin!), inquisitive readers can now try out Bespin 0.4.3 in Jetpack and Ubiquity. It will require you to apply patches by hand against the latest development versions of the relevant project, or wait until the changes land.

EDIT: here is an updated version of the Ubiquity Bespin patch, that includes autoindent, closepairs, and other useful settings.

Feedback (and bug reports) would be greatly appreciated.

How to track changes in the localization process

Now that Ubiquity 0.5 is on the verge of release, we have scrambled to demonstrate the localization possibilities by including a couple of alternative languages for the standard feeds (and for the new parser).

By the looks of it, the languages Catalan, Danish and Japanese are going to make it in, but we hope more will join them before the 0.6 release.

Being a localizer myself (I did the Danish translation), I know the pain of keeping up with ever-changing, new, and deleted keys. Mostly for my own benefit, I have put together a script that helps you track changes between the templates and the translations. If you place this script in your ubiquity/localization/ folder and run it, it should list relevant information for each language.

Examples:

If you want all the information, simply run the script:

python localization.py

In the current development version, this produces quite a lot of output, so I’ll paste a representative section only:

$ python localization.py
! Missing translation: da/developer.po
! 2 keys are missing in pageedit(da):
----------------------------------------------------
["If you used the 'edit page' command to put the page into editable mode, use this command to end that mode and go back to normal page viewing. If you want the changes to persist on page reload, issue the 'save' command first."]
----------------------------------------------------
["Saves edits you've made to this page in an annotation. They will persist on page reload. You can remove them with the 'undo page edits' command."]
----------------------------------------------------
! 2 keys are superfluous in pageedit(da):
----------------------------------------------------
["If you used the 'edit page' command to put the page into editable mode, use this command to end that mode and go back to normal page viewing."]
----------------------------------------------------
["Saves edits you've made to this page in an annotation. Undo with the 'undo page edits' command."]
----------------------------------------------------

If you just want an overview and not all the details (I imagine this will be useful for project leaders) you can run the command:

python localization.py | grep -E ^\!

which again produces a lot of information but looks something like:

$ python localization.py | grep -E ^\!
! Missing translation: da/developer.po
! 2 keys are missing in pageedit(da):
! 2 keys are superfluous in pageedit(da):
! 1 keys are missing in email(da):
! 1 keys are superfluous in email(da):

The future

The script is not yet added to the Ubiquity source distribution, as there are still a lot of changes that I feel need to go in before it is production-ready. I plan to integrate more options into the script, making it easier to get an overview, or to get info about only one language or only one template. It will also become smarter and realize when a key has changed, and what it has changed to – currently, as the above example shows, it lists such a key as both missing and superfluous. As the deadline for 0.5 is looming, I don’t expect it to be ready by then.

For the adventurous localizer, I do think it will already in its present state be a big help, so feel free to try it out and report back with any suggestions, improvements or bugs you may encounter.

Happy hacking and translating,

– cers / Christian Sonne

Making a search command – part 3

Another couple of days have passed, so another round of updates is in order. Some of the changes I’ll mention here affect the previous articles: Making a search command – the easy way! and Making a search command – part 2, so if you’ve read them: read this one too!

This is the part that broke your commands

And in more than one way. If you’ve tried some of the commands before the recent updates, you might have noticed that for some reason, certain queries wouldn’t return any previews, while others would work just fine. This is because some search engines return “strange” results in the middle of the normal results – and only do it sometimes – but the only way the parser knew which titles, previews and thumbnails fit together was by their index in each of their own lists. This was a very fragile system, and while it is still available as a fallback, using it isn’t recommended.

The main difference is that the parameter options.parser.container should no longer select the element that contains all the results, but rather it should select a list containing each result. Each result’s title, preview and thumbnail will then be found (if available) inside the result, and grouping is no longer dependent on indexes. This means that any given result can have a missing preview or thumbnail without affecting any of the others.
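The difference can be sketched like this, with findIn standing in for the jQuery lookup performed inside each result node (a hypothetical helper, not the actual Ubiquity code):

```javascript
// Each result is parsed as a unit, so a missing preview or thumbnail
// only affects that one result instead of shifting its neighbors.
function parseResults(resultNodes, findIn) {
  return resultNodes.map(function (node) {
    return {
      title:     findIn(node, "title"),
      preview:   findIn(node, "preview"),   // may be null; neighbors unaffected
      thumbnail: findIn(node, "thumbnail"),
    };
  });
}

// Under the old index-based scheme, the result lacking a preview would
// have shifted every later preview up by one; here it simply stays null.
var nodes = [
  {title: "A", preview: "first"},
  {title: "B"},                             // no preview
  {title: "C", preview: "third"},
];
var parsed = parseResults(nodes, function (node, field) {
  return node[field] || null;
});
console.log(parsed[2].preview); // "third" -- still aligned with "C"
```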

Obviously, changing the meaning of the parameter means you will have to rewrite any search command you have that uses it. I hope the benefit of stability will outweigh this initial annoyance.

JSON as promised

In my last post, I said I was going to be working on JSON support, and as of about 4 hours ago, initial support has landed in hg. I have successfully rewritten our default Google and Yahoo commands to use this new system, still using the JSON APIs provided by the services, but as I don’t have much experience with this sort of interface, it is entirely possible that I have in some way limited the functionality and thus the compatibility with some services. If this is the case, the JSON support will of course be expanded.

A slight note on usage is perhaps needed here. Let us take Google as an example:

CmdUtils.makeSearchCommand({
  name: "Google",
  url: "http://www.google.com/search?q={QUERY}",
  parser: {type: "JSON",
           url: "http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q={QUERY}",
           container: "responseData.results",
           title: "titleNoFormatting",
           preview: "content",
           href: "url"}
});

First you might notice that there are two fields called url, one outside the parser and one inside. In that order, the first one does exactly what it usually did, except it’s only used if the user presses return to go directly to the search page. The second is the one used for the actual searching if you pass type: "JSON" to the parser. If the API requires you to use POST, this works the same way as you’re used to.

Secondly, if you happen to be uncannily familiar with the structure of the JSON Google returns, you might notice that the container parameter I talked about earlier in this very post doesn’t seem to be pointing to each result. Instead, it’s pointing to the container of all the results. If you think about it, however: in the HTML case, jQuery makes sure we are returned a list of results, but in JSON, the parent element holding the results is already a list of results – so it sort of makes sense.

The only other new thing is the addition of the options.parser.href parameter, which was calculated automatically in the HTML version. In JSON, the URL is usually held in a separate field, so this is needed to make sure we actually link to the results.
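A dotted container path like "responseData.results" can be resolved by walking the parsed JSON one key at a time. This is a sketch of the idea, not the actual Ubiquity implementation:

```javascript
// Resolve a dotted path against a parsed JSON object.
function resolvePath(obj, path) {
  return path.split(".").reduce(function (o, key) {
    return o == null ? null : o[key];
  }, obj);
}

// Shaped like the response from Google's old AJAX Search API:
var json = {responseData: {results: [
  {titleNoFormatting: "Example", content: "a snippet", url: "http://example.com/"}
]}};

var results = resolvePath(json, "responseData.results").map(function (r) {
  return {title: r.titleNoFormatting, preview: r.content, href: r.url};
});
console.log(results[0].href); // "http://example.com/"
```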

What you can do to speed things up

These changes that break backward compatibility will happen less often – or hopefully not at all – if I get fast feedback and suggestions. The more you complain, the more likely I am to do something about it fast :-P

To try these changes out, run Ubiquity from source, or wait for 0.1.7 or 0.2pre15 where they will most likely be included.