Introducing — the incredible professional’s URL-shortener: a URL shortener with lots of new stuff, designed for the app developer who needs everything and more! 🙂

1. History — we remember the last 15 shortened URLs you’ve created. They’re displayed on the home page the next time you visit. Cookie-based.
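A cookie-based 15-item history like the one described could be sketched roughly like this (an assumed illustration; the service’s actual cookie format isn’t documented here):

```python
import json

MAX_HISTORY = 15  # "we remember the last 15 shortened URLs"

def push_history(cookie_value, new_short_url):
    """Prepend the newest short URL to the cookie, keeping only the last 15."""
    history = json.loads(cookie_value) if cookie_value else []
    # Drop any earlier occurrence so the list stays deduplicated.
    history = [new_short_url] + [u for u in history if u != new_short_url]
    return json.dumps(history[:MAX_HISTORY])

cookie = push_history("", "http://example.com/abc")
cookie = push_history(cookie, "http://example.com/def")
print(json.loads(cookie))  # ['http://example.com/def', 'http://example.com/abc']
```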

2. Click/referrer tracking — every time someone clicks on a short URL, we add 1 to the click count for that page and for the referring page.
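In the simplest form, the per-click bookkeeping described here amounts to two counters; a hypothetical in-memory sketch (the real service presumably persists this):

```python
from collections import defaultdict

clicks_by_page = defaultdict(int)      # keyed by short code
clicks_by_referrer = defaultdict(int)  # keyed by referring page

def record_click(short_code, referrer=None):
    """Add one click for the landing page, and one for the referrer if known."""
    clicks_by_page[short_code] += 1
    if referrer:
        clicks_by_referrer[referrer] += 1

record_click("abc123", "http://example.com/post")
record_click("abc123")
print(clicks_by_page["abc123"])  # 2
```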

3. There’s a simple API for creating short URLs from your web apps.
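Comments further down mention an `/api?url=...` endpoint; assuming that shape, a client call might look like this (the host name is a placeholder, and the plain-text response format is an assumption):

```python
import urllib.parse
import urllib.request

def build_api_url(long_url, api_base="http://example.com/api"):
    # Percent-encode the long URL so its own query string survives intact.
    return api_base + "?" + urllib.parse.urlencode({"url": long_url})

def shorten(long_url):
    """Fetch the (assumed) plain-text response containing the short URL."""
    with urllib.request.urlopen(build_api_url(long_url)) as resp:
        return resp.read().decode("utf-8").strip()

print(build_api_url("http://example.com/a?b=1&c=2"))
```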

4. We automatically create three thumbnail images for each page you link through: small, medium, and large. You can use these when presenting choices to your users.
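The three sizes aren’t specified, but scaling to a fixed longest side is the usual approach; a sketch with assumed pixel dimensions:

```python
# Hypothetical pixel sizes; the actual dimensions aren't documented in the post.
THUMB_SIZES = {"small": 75, "medium": 150, "large": 300}

def thumbnail_dims(width, height, max_side):
    """Scale (width, height) proportionally so the longer side equals max_side."""
    scale = max_side / max(width, height)
    return round(width * scale), round(height * scale)

print(thumbnail_dims(1024, 768, THUMB_SIZES["small"]))  # (75, 56)
```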

5. We automatically mirror each page; you never know when you might need a backup. 🙂

6. Most important for professional applications: you can access all the data about each page through a simple XML or JSON interface. Example.
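The example link didn’t survive, but a JSON response exposing “all the data about each page” might plausibly look like this; every field name here is an assumption, not the documented schema:

```python
import json

payload = json.loads("""
{
  "short_url": "http://example.com/abc123",
  "long_url": "http://example.com/some/long/path",
  "clicks": 42,
  "thumbnails": {"small": "...", "medium": "...", "large": "..."}
}
""")

print(payload["clicks"])              # 42
print(sorted(payload["thumbnails"]))  # ['large', 'medium', 'small']
```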

7. All the standard features you expect from serious URL shorteners.

And it’s just the beginning: we’re tracking lots more data, so as more URLs are shortened we’ll be able to turn on more features.

34 responses to “Introducing — the incredible professional’s URL-shortener”

  1. Very well done–I’ve been waiting to see what the API looked like. I really like the approach of creating a tinyurl service in order to generate an index. I’ve seen quite a few people go at the problem from the other direction, by trying to index tinyurls shared on twitter and other places, and bubble up popular ones, but the approach you’ve taken is a lot more direct and elegant.

    ps. fans may want to check out this bookmarklet I made a little while ago: . It’s a quick way to get a url and bypass the form submit process.

  2. Kortina, tks for the props. I think tinydb + + could = a very hip publishing platform. We need to discuss.

    And, the bookmark is bitchen. Thank you.

  3. @jayridge
    Yea, for sure. Hit me up on twitter or email me when you’re available to discuss.

  4. Pingback: Scripting News for 7/8/2008 « Scripting News Annex

  5. Pingback: frEdSCAPEs 3.0 » Blog Archive » Uit de stallen van SwitchAbit: - the incredible professional’s URL-shortener

  6. Pingback: launches today (Scripting News)

  7. Nice. But as Emperor Joseph said in “Amadeus”: too many clicks. One for the bookmarklet. One for the shorten command. Then one for the copy. And maybe one more to get back to the page I was on.

  8. Cool. Can you explain a little more about the click tracking? One click generates a point for both the landing page and the referring page? If two people create a short URL for the same landing page, do they both see the combined clicks or are they tracked under separate short URLs?

    Nice work.

  9. Any plans to incorporate into TwitterFox or other similar app via one click?

    I’ve been needing a way to track my site recs on twitter, so this is very much appreciated so far. As it turns out, people are absolutely clicking on the links I share. Very effective.

  10. Wondering why you’re using 302 redirects:

  11. Kevin, I didn’t ask which code they were using, but I think 302 is the right answer. We want to be in the loop so we can count clicks, and at some point we’ll check if the target is still there, and if not, redirect to the cached version instead of the missing one.
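The logic described here (stay in the loop with a 302, and fall back to the cached copy when the target disappears) could be sketched like this; the cache host is a placeholder and the liveness check is assumed to happen elsewhere:

```python
CACHE_BASE = "http://cache.example.com/"  # hypothetical mirror location

def redirect_response(short_code, long_url, target_is_alive):
    """Return (status, location) for the redirect. A 302 keeps every click
    flowing through the shortener, so clicks can still be counted."""
    if target_is_alive:
        return 302, long_url
    # Target is gone: send the visitor to the mirrored copy instead.
    return 302, CACHE_BASE + short_code

print(redirect_response("abc123", "http://example.com/page", False))
# (302, 'http://cache.example.com/abc123')
```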

  12. Pingback: launches today · FREE BLOG SHARE

  13. Pingback: a URL shortener with semantic and geo-spatial analysis

  14. Dave, a 301 keeps you in the loop of people who click on the link, a 302 tells everyone that your link is the real one and the redirected one is temporary. What happens when goes down?

  15. LOVE THIS! Thanks so much 🙂

  16. I am curious if you are going to allow at some point a short url owner to modify the destination, and if so, what will happen to the ones I have already set up?

  17. Anyone else not getting the short URL to display? I get the grey box and yellow box after I choose shorten, but nothing is in the boxes. Seen on both IE 6 and 7.

  18. Nice service, but I sorta abhor that you guys don’t adhere to any of the cache-control mechanisms that page authors can use to tell you whether you’re even *allowed* to cache a page:

    Please, please fix this — otherwise, it’s the ultimate peeing-in-the-pool behavior.
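Honoring the mechanisms this commenter means would boil down to checking a few response headers before mirroring a page; a sketch of the check being asked for (not what the service currently does, which is the complaint):

```python
def may_cache(headers):
    """Return False if the origin's headers forbid keeping a cached copy."""
    cache_control = headers.get("Cache-Control", "").lower()
    directives = {d.strip() for d in cache_control.split(",") if d.strip()}
    if directives & {"no-store", "no-cache", "private"}:
        return False
    if headers.get("Pragma", "").lower() == "no-cache":  # HTTP/1.0 fallback
        return False
    return True

print(may_cache({"Cache-Control": "no-store"}))           # False
print(may_cache({"Cache-Control": "public, max-age=60"}))  # True
```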

  19. Hi guys, nice service you have there! Just wondering: is it possible for a user to release/reuse a used keyword?

  20. This looks like an amazing utility. 🙂

  21. Nice. There’s a couple of features I’d really like to see, though:
    – Click-through warning support. A couple of sites I regularly post links on have a policy regarding the content of pages you can link to, and this can be avoided by linking to a page that explicitly warns users about the page they’re about to visit (for example, that the linked page contains NSFW content). It’d be great to have this functionality integrated with a link shortener. I realise I can use the API to shorten links to a separate warning page, but it’s not as neat a solution as the shortener doing it directly.
    – I want to be able to CNAME my own domain to and have my own personal link shortener for my forum/community site/whatever, with its own namespace of links.

    I’m also curious as to why you’re using explicit URLs for cached pages etc, instead of the more friendly and more portable (should you choose to switch from S3 to something else) CNAME option.

  22. I have a small suggestion. When I click on the browser bookmarklet that you supply, would it be possible to also copy the generated URL to my clipboard? It would save me that step. Thanks.

  23. Feature request: API key.
    Reason: so I (or my org that creates URLs) can keep track of which ones we’ve created.
    How it would work: we request via the API a URL. The request contains the API key (eg
    Need to have the key first so it doesn’t get confused with GET parameters in the URL being sent.
    Difficulty: I dunno, you tell me.
    User benefit: it would benefit orgs that want to track a lot (say, a newspaper that prints them to save space).
    Benefit to you: you might even be able to charge a small fee to create one.
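The key-first ordering being requested might look like this on the wire (the endpoint and parameter names are assumptions, not a documented API):

```python
import urllib.parse

def build_keyed_request(api_key, long_url, api_base="http://example.com/api"):
    # Put the key parameter before the url parameter, as suggested, so the
    # service reads its own parameters before the possibly query-laden target.
    params = [("key", api_key), ("url", long_url)]  # list preserves order
    return api_base + "?" + urllib.parse.urlencode(params)

print(build_keyed_request("SECRET123", "http://example.com/a?b=1"))
```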

  24. Jason Levine [#comment-54] may have a formal point, but there’s no escaping that, once *anything* has appeared on the web, PRAGMA NO-CACHE OR NOT, there’s no way to ensure its non-offline’y static survivability.

    Further on, Jason worries about the quality of cached items: “[…] Given how much of today’s web is based on images, CSS, Javascript, Flash, and other added content, [which can be disputed –Ianf] this means that quite a bit of the content on cached versions of most pages will be missing, so it’s unclear how much utility these cached pages will hold to anyone. […]”

    …to which I can but respond by apeing[sic!]-by-analogy the immortal words of one Charles Darwin: “any intermediate eye form is better than no-eye at all.” Ie. survival of the cachiest! 😉

  25. What happened to my just-posted “A couple of points” item with important suggestions? Did your back-end maul it because I put in a correctly-formed internal-HREF link? I expected at least “awaiting moderation”.

  26. Ianf, I’ve responded over here to your point about whether should be caching content that authors state shouldn’t be cached:

  27. This is a repost of an earlier “couple of points” comment trashed needlessly by your WP.

    You NEED TO ENSURE that no tinyurl, or other known similar services, get processed. A case in point is in [#comment-57 : chris // July 10, 2008 at 12:05 am] above, “” which returns

    which in turn goes to

    …WHICH OF COURSE NEVER RESOLVES, but goes round a bit until returning


    but which could be a viral nastie for all I know.

    To combat such potentially lethal “double-cloaking,” I suggest that you implement better than your current, CONVOLUTED resolve-url mechanism.

    (not to mention deliver the resolved url, and all /feed.php output, with proper, explicit HTTPD content-type “text/plain” or “text/html” headers – currently it’s untyped, causing my mobile browser to declare it unknown/ unsupported data type and saving it to file. Ie. do the same as with your /api?url=url stream, WHICH RENDERS CORRECTLY.).

    Tinyurl allows prepending of any url with “preview.” for such purpose, ie.

    but you could do it one better in multiply-redundant/ minimally-intrusive/ least-typo-prone fashion. E.g. all, or a subset of these:

    … to MAXIMALLY ease the task of remembering correct syntax when needing to manually check any publicly-posted 😉 url prior to clicking it.

    And then you’d really need to preempt the use of selected/ high-profile phish-bait custom labels such as as

    etc. This one’s a no-brainer, innit? (Rhetorical question.)

  28. Manny, tks for the bug report. This has been fixed.

  29. jayridge, yes errors are gone, but the output of that feed.php op is still served sans explicit http content-type. See midway in my [#comment-74] above.

  30. Ianf: Understood. This works as you described in development and will be in the next release.


  31. More suggestions.
    1) URL resilience: 0 and O are treated the same by tinyurl, but not here. Think you should do the same – it’s an easy transcription error, and doesn’t cost much in terms of space.

    2) Case-sensitivity: is that really a good idea? OK, going case-insensitive halves the number of potential links, but you can have as many links as you like by adding more characters. If you’re reading out a URL to someone, do you want to have to specify caps or no caps?
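Both suggestions amount to normalizing a short code before lookup; a minimal sketch of that idea (assumed behavior, not what the service does today):

```python
def normalize_code(code):
    """Fold look-alike characters together and ignore case when resolving."""
    folded = code.lower()
    folded = folded.replace("o", "0")                    # 0 and O collide
    folded = folded.replace("l", "1").replace("i", "1")  # 1, l, and I collide
    return folded

print(normalize_code("aB0l") == normalize_code("Ab0I"))  # True
```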
