Is an API really needed?
Freshness Warning
This blog post is over 18 years old. It's possible that the information you read below isn't current and the links no longer work.
23 Oct 2006
Drew McClellan suggests that perhaps your website could be your API, using microformats, semantic HTML, and a little screen scraping.
Almost three years ago, Jon Udell suggested something similar and I wrote a rebuttal. Much of it applies to McClellan’s suggestions, too. Here’s a reposting of my original piece.
Jon Udell waxes nostalgic about the good old days of screen scraping HTML in order to build the first generation of Web services. That’s great and I’ve built my share of screen scraping applications as well. But then Udell goes on to propose that companies should abandon modern Web services technologies in favor of screen scrapes helped along by well-formed XHTML.
Udell’s reasoning is that building Web services with SOAP is too complicated. "But if I’d had to register for an API key and locate WSDL documentation for each of the three services whose results I compared, I probably wouldn’t have bothered," he says. His entire argument is based on his experiences with the Google API and its specific SOAP implementation.
Google requires that anyone using their API register for and use an API key—an identifying token that lets Google track the usage of their API down to a specific user or application. Google requires it, but the SOAP protocol does not. Most SOAP services don’t have any sort of key and if you were building a tool for an intranet, you probably wouldn’t need or want such a scheme. Not only does Udell miss that point, but he also forgets that SOAP isn’t the only Web services technology.
Udell says that a primary threat to your intranet is disuse. If people find it too difficult to create and use information on the intranet, they won’t bother. That’s true; if you create onerous processes that content creators must follow, they’ll find ways around them, publishing their information in ways that you don’t expect. But Udell’s assertion that building data access through Web services will make it too difficult for people to use your data is preposterous. Screen scraping is more difficult and more apt to fail than using stable, published APIs. And with REST, the APIs are just as easy to access as any other Web document.
As an example, let’s use product data for my new camera. What’s easier—scraping Amazon’s product page or getting the same data in XML format from their REST interface? For each method I have a unique URL that I request to get the data. There aren’t any complicated steps to follow for either system. But the HTML version, even if it were well-formed XHTML, would be significantly harder to retrieve meaningful data from. And changes to the display of the information would often mean changes to the structure of the HTML, necessitating further changes to my screen scraping application. Amazon does require a developer’s token (an API key, essentially), but again, that’s only so they can control usage. There’s no reason at all that a REST system like this couldn’t be built without it.
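To make the contrast concrete, here’s a small sketch in Python. The product markup and XML below are invented for illustration, not Amazon’s real responses. The point is that the scrape depends on a regex tied to presentational markup, while the XML version addresses the data by its structure:

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical HTML page for a product (what a screen scraper sees).
html_page = """<html><body>
<div class="product"><h1>Camera</h1>
<span class="price">$499.00</span></div>
</body></html>"""

# Hypothetical XML from a REST interface for the same product.
xml_doc = """<product>
  <name>Camera</name>
  <price currency="USD">499.00</price>
</product>"""

# Screen scraping: a brittle regex tied to the page's markup.
# Rename the CSS class or restyle the page and this silently breaks.
match = re.search(r'<span class="price">\$([\d.]+)</span>', html_page)
scraped_price = match.group(1) if match else None

# REST-style XML: ask for the element by name; styling changes don't matter.
root = ET.fromstring(xml_doc)
api_price = root.findtext("price")

print(scraped_price, api_price)
```

Both paths yield the same price today, but only one of them survives a redesign of the HTML.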
But doesn’t creating a REST interface mean more work for the content producers? Probably not. Presumably your corporate intranet is using some sort of content management system. Otherwise there’d be no way to enforce this XHTML-only rule. Furthermore, that content management system probably stores the content in a database somewhere, separate from the presentation of said content. All you need to do is build one REST interface that retrieves the required content from that database and presents it as a pre-determined XML document instead of an HTML document. The content producers could go on creating content as they always have, blissfully unaware that they are also populating a Web service.
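A minimal sketch of that idea, assuming a hypothetical CMS schema with a single `pages` table (the same table that would feed the HTML templates). The function renders stored content as XML rather than HTML; the slug and column names are illustrative:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Stand-in for the CMS database; real systems would already have this.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pages (slug TEXT, title TEXT, body TEXT)")
db.execute(
    "INSERT INTO pages VALUES ('camera-review', 'Camera Review', 'Sample body text.')"
)

def rest_document(slug):
    """Serve the stored content as an XML document instead of an HTML page."""
    row = db.execute(
        "SELECT title, body FROM pages WHERE slug = ?", (slug,)
    ).fetchone()
    page = ET.Element("page", slug=slug)
    ET.SubElement(page, "title").text = row[0]
    ET.SubElement(page, "body").text = row[1]
    return ET.tostring(page, encoding="unicode")

print(rest_document("camera-review"))
```

The same database row now backs two URLs, one returning HTML for readers and one returning XML for programs, and the authors never touch either template.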
Udell’s XHTML scraping suggestion has significant risks as well. Remember that making the process of content creation difficult will ensure that people find other ways to create content—ways that you don’t control. Yet in advocating screen scraping, Udell concedes, "it’s true that creating XHTML pages requires more discipline than hacking out HTML, and it may incur some retraining costs." So not only does his approach make it difficult to build systems that automatically consume information, it also makes that information more difficult to create in the first place.
People will flock to things that are easy. RSS took off because it was easy to create and easy to consume. Sure, it would be possible to create screen scraping applications that would take any well-formed XHTML content source and pull that content into a newsreader. But it’s much easier for everyone concerned to create a simple, easy-to-understand format that contains all of the information in logical chunks and just run with it.