RE: [Jersey] Re: Hypermedia clarification

From: Markus Karg <>
Date: Wed, 17 Feb 2010 18:29:45 +0100

> > Yes, if you write a client for exactly that web service, but not if
> you are
> > writing a generic client. Do you have a special browser for each web
> site?
> > Not me.
> One might create a generic HTTP user agent[1] with a plugin API to
> support media types, extension headers (e.g. Link as of now),
> extensions to extensions (e.g. support for certain link rels) but I'd
> certainly not expect a user agent to be generic and handle anything it
> does not know about.

Fielding's dissertation, in the strict sense, is talking about generic clients,
like browsers, and it would be cool (and possible) to have one -- if I can
surf the web with one browser, why should I need a special client for each
RESTful service?

The other problem with your approach is that REST is typically used for
"open" systems, i.e. systems that third-party applications have to access
(e.g. mashups, EAI, etc.). You cannot provide a plugin for all of them, so
they would all have to reprogram their existing clients to be able to
communicate with your server. If the only allowed header were "Link", this
would be much more relaxed, and if we did not use headers at all, it would
be no problem at all.

Please do not respond that your only client is written by yourself... if
that is true, I wonder why you want to do REST at all, since then you can
do whatever you want anyway without caring about the definition of REST. ;-)

> We have one kind of such a setup (or configuration) which we call Web
> browsers (Opera, Safari, Firefox, ...). The reason they do not get more
> specific is that they are human driven user agents.

I do not agree. I could imagine someone writing a "browser" that can be
remote-controlled by batches, COM, whatever (like "cURL" on steroids). The
problem would stay the same, even if a script controls which links to call
in what sequence, without a human touching the mouse once. The ability to
detect links (and especially their "role") is essential to make this work.
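To make that concrete: such a scriptable client needs little more than the ability to pick a link out of a response by its rel. A minimal sketch of that one piece (the header format is the "Link" style discussed above; the class and method names are my own invention, and the naive splitting would not survive commas inside URIs):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal Link-header parser: enough for a script to select the next
// URI to follow by its "rel" (role), without hardcoding any URIs.
public class LinkFollower {

    // Parses a Link header value such as
    //   </orders/42>; rel="payment", </orders>; rel="up"
    // into a map from rel to target URI.
    public static Map<String, String> linksByRel(String header) {
        Map<String, String> links = new LinkedHashMap<String, String>();
        for (String part : header.split(",")) {
            String[] segments = part.split(";");
            String uri = segments[0].trim();
            if (!uri.startsWith("<") || !uri.endsWith(">")) continue;
            uri = uri.substring(1, uri.length() - 1);
            for (int i = 1; i < segments.length; i++) {
                String seg = segments[i].trim();
                if (seg.startsWith("rel=")) {
                    links.put(seg.substring(4).replace("\"", ""), uri);
                }
            }
        }
        return links;
    }
}
```

The driving script then only says "follow payment, then follow up" -- the URIs themselves stay under the server's control, which is the whole point.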

If you hardcode the job into the client, again, you don't need REST at all;
you can just do what you want, so this discussion is useless then (since it
is about REST and not about "what I could do if I did not have to care
about third-party clients"). ;-)

> In a machine to machine system I'd have one of such setups for each
> kind of service (e.g. AtomPub or OpenSearch) each of which would be
> capable of handling all the hypermedia semantics (media types, link
> rels, headers,...) that are used by that *kind* of service.[2]

The problem is that "kind" is not defined: the list of what one could invent
is endless. I agree that you can certainly use special Atom headers if you
write an Atom client, and so on -- but only if the world agrees upon *which*
particular headers to use. No doubt you are right on that point. But as I
said, I do not plan to write a specialized client, but a generic one that
detects what it can do from the state represented by *any* RESTful server,
not just by a particular one.
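For what it's worth, the "plugin" idea and a generic client are not mutually exclusive: the generic core follows links and dispatches representations, and plugins (where present) add understanding of specific media types. A rough sketch, all names invented, to show that an unknown media type need not be a hard failure:

```java
import java.util.HashMap;
import java.util.Map;

// A generic client core that dispatches representations to whatever
// media-type handlers happen to be registered. Unknown types do not
// crash the agent; it can detect and report them instead.
public class GenericAgent {

    public interface Handler {
        void handle(String body);
    }

    private final Map<String, Handler> handlers = new HashMap<String, Handler>();

    public void register(String mediaType, Handler h) {
        handlers.put(mediaType, h);
    }

    // Returns true if some plugin understood the representation;
    // false means "detected but not understood" -- a reportable state.
    public boolean dispatch(String mediaType, String body) {
        Handler h = handlers.get(mediaType);
        if (h == null) return false;
        h.handle(body);
        return true;
    }
}
```

The generic parts (link detection, content negotiation) live in the core; everything that makes a client "an AtomPub client" would be just one more registered handler.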

> >> And that is
> >> what REST is particularly good at (and what should be leveraged
> inside
> >> the enterprise, too. Although confusing to most CxOs in the
> beginning
> >> :-)
> >
> > Didn't get that point, actually. Can you explain?
> The uniform interface makes it possible to still *have* a conversation
> even if the capabilities of client and server do not match. That is much,
> much better than just having the conversation fail. With REST I can
> always follow my nose and figure things out. Inside the enterprise
> 'follow your nose' might be an ITSM process created as a last resort to
> update clients with insufficient capabilities.

Certainly this is a great benefit of RESTful systems, but you can have the
same with SOAP / WSDL, so it is neither a particularly RESTful
characteristic, nor does REST depend on it. REST would also work if the
conversation were totally unreadable but still kept its characteristics
(stateless, client-driven, and so on). Actually, it would be valid to write
a RESTful application using a different protocol that is absolutely
non-readable at all.
> Think of it as controlled failure with a well known path to recovery.
> Even if that is not automatic, it is far better to get a wake up
> emergency call at 4 AM that tells you "we got a 406 because our client
> does not understand application/fancy-new-stuff (spec at http://...)"
> than it is to be handed a bunch of stack traces of an application you
> probably don't even know.

Maybe. But this has to be carefully designed. I have seen lots of systems
where it would have been much better to be alerted at 4 AM than to let the
machine go on in a nondeterministic state -- which is exactly what will
happen if the client ignores a link because it didn't detect it in some
fancy header...
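The difference is whether the failure is explicit. A client designed for "controlled failure" would treat a missing-but-required link as a loud error rather than carry on blindly -- roughly like this (the rel names and exception choice are my own example, not anything from the thread):

```java
import java.util.Map;

// Failing loudly beats drifting into an undefined state: if a link the
// workflow depends on is not present (or was not detected), stop and
// say so, instead of silently skipping the step.
public class StrictFollower {

    public static String requiredLink(Map<String, String> linksByRel, String rel) {
        String uri = linksByRel.get(rel);
        if (uri == null) {
            // This is the 4 AM wake-up call: diagnosable, actionable.
            throw new IllegalStateException(
                    "required link rel '" + rel + "' missing from response");
        }
        return uri;
    }
}
```

Whether that exception pages an operator or triggers an ITSM process is then a deployment decision, not an accident of a client skipping headers it never saw.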

> (Me babbling? Yes. However, there has to be some vision towards what
> REST in the enterprise might actually mean).

Why "there has to"? There is one: The dissertation. ;-)