March 25, 2007

David Baron on versioning

Filed under: integration,webarch — Mark Baker @ 9:29 am

David Baron of Mozilla has
chimed in
on a topic
near and
dear to much of the work we do at Coactus. It looks like it’s part one of a series; I look forward to the rest.

One observation; it looks like David’s using the term “versioning” to mean what I call “explicit versioning”, where the consumer of the document needs to know what “version” the document is in order to understand it. I think that’s an important distinction, since the general versioning problem can have “no explicit versions” as a solution. I haven’t looked closely enough at the latest draft TAG finding to confirm, but I suspect they use the term similarly to how I use it.

Update; Stefan Tilkov finds that Arjen Poutsma says essentially the same thing.

• • •

January 2, 2007

Starting with the Web

Filed under: architecture,integration,webarch,webservices — Mark Baker @ 11:07 pm

Starting with the Web

A position paper for the W3C Workshop on Web of Services for Enterprise Computing, by Mark Baker of Coactus Consulting.

My position

For those who might not be aware of my position regarding Web services, I will state it here for clarity.

I believe that the Web, since its inception, has always been about services, and therefore that “Web services” are redundant. Worse than that though, while the Web’s service model is highly constrained in order to provide important degrees of loose coupling, Web services are effectively unconstrained, and therefore much more tightly coupled. As a result, Web services are unsuitable for use at scale; certainly between enterprises, but even between departments in larger enterprises.

With that in mind …

Problem statement

I begin from the assumption then, that Web services are not part of the solution, and that the Web is. And while I believe that the Web is far more capable as a distributed system than commonly believed, there is also little doubt that it is lacking in some important ways from an enterprise perspective … just not as much as some believe.

I suggest that the problem statement is this;

How can the Web be used to meet the needs of the enterprise?

Part of the Solution

When looking at how to use the Web – how to add new “ilities” (architectural properties) or modify amounts of existing ones – we need an architectural guide to help us understand the tradeoffs being made by either relaxing existing constraints, or adding new ones. I suggest that the REST architectural style is a good starting point, because in addition to being rigorously defined and providing well documented kinds and degrees of properties, it is also the style used to craft the foundational specifications of the Web; HTTP and URIs.


I anticipate that most of the improvements that are developed will be REST extensions; solutions which layer additional constraints atop REST, and therefore don’t require disregarding any of REST’s existing constraints. In my experience this is generally desirable, because of REST’s emphasis on simplicity and the role many of its constraints play in providing it. In addition, on the Web, it means that network effects are maximized and centralization dependencies (and other independent-evolvability issues) avoided. It’s also often achievable in practice, in my experience, as long as the problem continues to involve document exchange… but not always.

For example, when performance is a make-or-break proposition, and other avenues for improving it have been exhausted, sometimes the stateless constraint will need relaxing. This comes with reasonably large costs – reliability, scalability, and simplicity are reduced for example – but if those costs are fully appreciated and the benefits worth it, then it is the right choice. Because, of course, the solution isn’t REST as a panacea, it’s REST as a starting point, a guide.

The Enterprise Web?

So what features do enterprises need exactly? Some of the common ones mentioned include reliability, security, and transactions. Let’s dig a little deeper into those …

Reliability


Reliability is often mentioned by Web services proponents as a property in which the Web is lacking. But what is meant by “reliability” in that context? Commonly, it’s used to refer to a quality of the messaging infrastructure; that HTTP messages are not, themselves, reliable. That’s true of course, but it’s equally true of any message sent over a network which is not under the control of any single authority, like the Internet; sometimes, messages are lost.

The issue, therefore, is how to build reliable applications atop an unreliable network.

In general, there is little that can be done for a Web services based solution. As arbitrary Web services clients and servers share knowledge of little more than a layer 6 message framing technology and processing model (SOAP), the only option for addressing the problem is at this level, using that knowledge. So we can take actions such as assigning messages unique identifiers in an attempt to detect loss or duplication, but not much else. Solutions at this level tend to do little more than trade off latency (i.e. wait time for automated retries) for slightly improved message reception rates. But even then, applications still have to deal with the inevitable case of message loss. Some might even argue that “reliable messaging” is an oxymoron, and that RM solutions are commonly attempts to mask the unmaskable.

Web servers and clients though, share a lot more than just a message framing and processing standard; they share a standardized application interface (HTTP GET, PUT, POST, etc.. as well as associated headers and features), and a naming standard (URIs), both richer (layer 7) artifacts that a generic Web services solution can’t take advantage of. As a result, the options for providing reliability are greater and, in my experience, simpler.

Consider that a reliable HTTP GET message would make little sense, as HTTP GET is defined to be safe; if a GET message is lost then another can be dispatched without fear of the implications of two or more of those messages arriving successfully at their destination. The same also holds for PUT and DELETE messages; though they’re not safe, they are still idempotent. Known safe and/or idempotent operations are a key tool in enabling the reliable coordination of activities between untrusted agents. Most important though, is that there exist known operations, as this makes a priori coordination possible in the first place.

HTTP POST messages, however, due to POST being neither safe nor idempotent, have some of the same reliability characteristics as do Web services. But despite this, the use of POST as an application layer semantic, together with standardized application identifiers, permits innovative solutions such as POE (POST Once Exactly), which uses single-use identifiers to ensure “only once” behaviour.
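A rough sketch of the bookkeeping behind a POE-style resource may help; the class and method names here are mine, purely for illustration, not anything from the POE draft itself. The server mints a single-use URI, accepts the first POST to it, and refuses any replay, so a client can retry a POST blindly without risking a duplicate effect.

```python
import uuid

class PoeResource:
    """Illustrative POST-Once-Exactly bookkeeping.

    The first POST to a minted URI is accepted; a replayed POST
    (e.g. an automated retry after a lost response) is refused with
    405, and the original outcome is reported instead."""

    def __init__(self):
        self.unused = set()
        self.results = {}

    def mint_uri(self):
        uri = f"/orders/poe/{uuid.uuid4()}"
        self.unused.add(uri)
        return uri

    def post(self, uri, body):
        if uri in self.unused:
            self.unused.remove(uri)          # consume the single-use URI
            self.results[uri] = f"created:{body}"
            return 200, self.results[uri]
        if uri in self.results:
            # Replay: report the original outcome rather than act twice.
            return 405, self.results[uri]
        return 404, None

poe = PoeResource()
uri = poe.mint_uri()
print(poe.post(uri, "order-1"))  # (200, 'created:order-1')
print(poe.post(uri, "order-1"))  # (405, 'created:order-1') -- retry is harmless
```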

The “404” response code is also sometimes mentioned as a reliability problem with HTTP, but it is not at all; quite the opposite, in fact. The 404 response code, once received, reliably tells the client that the publisher of the URI has, either temporarily or permanently, detached the service from that URI. A distributed system without an equivalent of a 404 error is one which cannot support independent evolvability, and therefore is inherently brittle and unreliable.

Security


Security on the Web today is likely the best example of how the Web itself has been limited by the canonical “Web browsing” application. The security requirements for browsing were primarily to provide privacy, and the simplest solution providing privacy was to use SSL/TLS as a secure transport layer beneath HTTP (aka “HTTPS”).

For the enterprise, and for enterprise-to-enterprise integration in particular, privacy is necessary, but insufficient in the general case as requirements such as non-repudiation, intermediary based routing, and more are added (others can better speak to these requirements than myself). Of course, TLS doesn’t really begin to address these needs, and work is clearly needed.

Luckily, I’m not the first person to point this out. There have been at least two initiatives to add message-level security (MLS) to HTTP; SEA from 1996, HTTPsec from 2005. One might also consider S/MIME and OpenPGP as simple forms of MLS insofar as they’re concerned with privacy and sender authentication.

Interestingly, message-level security is actually more RESTful than transport level security in two respects; that visibility is improved through selective protection of message contents (versus encrypting the whole thing with transport layer security), and that layering and self-descriptiveness (possibly including statelessness) are improved by using application layer authentication instead of (potentially – when it’s used) transport layer authentication.

Transactions


This is obviously an immensely broad topic that we couldn’t begin to cover in a position paper in detail sufficient to actually make a point. Suffice it to say though, that as with reliability, my position is that the space of possible solutions is transformed considerably by the use of a priori known application semantics (and otherwise within the constraints of the REST architectural style), and therefore that solutions will look quite unlike those used for Web services.

Perhaps there will be an opportunity to work through a scenario at the workshop, if there’s enough interest.

Conclusion


The Web offers a more highly constrained, more loosely coupled service model than do “Web services”. While enterprises can often afford the expense of maintaining and evolving tightly coupled systems, inter-enterprise communication and integration is inherently a much larger scale task and so requires the kind of loose coupling provided by the Web (not to suggest the Web has a monopoly on it, of course). Even within the enterprise though, loose coupling using existing pervasively deployed standards (and supporting tooling, both open source and commercial) could offer considerable benefits and savings. REST should be our guide toward that end.

• • •

Two more reasons why validation is still harmful

Filed under: integration — Mark Baker @ 12:45 am

Another way to look at the validation problem is through the eyes of a software architect. We’re often concerned about “encapsulation”, the practice of keeping related business logic and data together. That practice can take the form of “objects” in OO systems, or just through the partitioning of data and processing logic in non-OO systems. In either form, the value there is that maintainability is improved.

Schemas, on the other hand, when used in a typical “gatekeeper” style – where the software doesn’t see the document until after it’s passed validation – break encapsulation by enforcing some business rules separately from the software responsible for them. In practice, it’s rarely the case that software doesn’t also have some of these rules, meaning that they’re duplicated, creating a maintainability issue.

Yet another way of looking at the problem is as a communication problem (or lack thereof in this case). In the scenario described in the previous post, the meaning of the document with the field value of “4” is still unambiguously clear to both sender and recipient, it’s just that the recipient isn’t expecting that specific value, and so it rejects the whole message as a result.

It’s true that there will be cases where unknown values like this need special attention; for example, if the “4” represented “nuclear core shutdown procedure #4” and the software only currently knows numbers 1 to 3. But we know how to handle those cases – by defining sensible fallback behaviour as part of our extensibility rules – and schemas play no part in that solution. More often than not, software which can handle the value “3” can also handle the value “4” without any trouble; an age of “200”, a quantity of “100000”, the year “8000”, a SKU following a non-traditional grammar, etc…, and the only reason we don’t accommodate those values is because we don’t currently believe them to be reasonable.
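The “sensible fallback” approach can be sketched in a few lines. This is a hypothetical example of my own (the procedure names are invented for illustration): rather than a gatekeeper schema rejecting the whole document, the unknown value triggers a defined fallback.

```python
def handle_shutdown(doc):
    """Must-understand only what you must: an unknown value gets a
    defined fallback rather than causing wholesale rejection."""
    known = {1: "procedure-1", 2: "procedure-2", 3: "procedure-3"}
    proc = doc.get("procedure")
    if proc in known:
        return known[proc]
    # Defined fallback behaviour, agreed as part of the extensibility
    # rules, for values that are understood but not yet handled:
    return "escalate-to-operator"

print(handle_shutdown({"procedure": 2}))  # "procedure-2"
print(handle_shutdown({"procedure": 4}))  # "escalate-to-operator", not a rejection
```

The contrast with the gatekeeper style is that the fallback lives with the business logic, so adding “procedure-4” later changes exactly one place.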

But that’s really no way to write long-lived software, is it? To expose – essentially as part of the contract to the outside world – a set of assumptions made at some soon-to-be-long-past point in time? Is this anything less than an announcement to the world of the inability of the developers of this software to adapt? Isn’t it also an implementation detail, further eroding the separation of interface and implementation?

If the message can be understood, then it should be processed.

• • •

November 10, 2005

On Interface, Implementation, and Reuse

Filed under: architecture,integration,webarch,webservices — Mark Baker @ 2:58 pm

A common technique in software development, but particularly in distributed software development, is the “separation of interface from implementation”, whereby software components are developed to have dependencies only upon the interfaces (the connectors, in architectural terms) of other components, and not their implementations. The purpose of this technique is to isolate the impact of a change of implementation to just the changed component, thereby reducing coupling between components. The technique is so pervasively referenced and evangelized, that it’s practically axiomatic.

Unfortunately, it’s not commonly practiced.

Many who claim that they’re using this technique will tell you that the use of a description language, such as IDL or more commonly nowadays, WSDL, does this for them. It does not.

As mentioned, the purpose of the technique is to isolate the impact of implementation changes. Yet this WSDL contract for a new home buyers database places extreme restrictions on how the implementation can vary because it is so specific to, well, a new home buyers database. Need an interface to an historical database of home purchases, or one to a database of recent automobile sales? Too bad, you need a new interface, and therefore new code to use that service. So much for that separation of concerns!

The lesson here, I suggest, is the following;

True separation of interface from implementation requires designing broadly reusable interfaces.
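The contrast can be sketched in code. This is my own illustrative sketch, not any real system’s API: a uniform, REST-style connector that client code can depend on regardless of which database sits behind it.

```python
class Resource:
    """A broadly reusable interface in the REST style: the same small
    set of operations applies whether the implementation is a new home
    buyers database, an historical purchases database, or a database
    of recent automobile sales."""
    def get(self, uri): ...
    def put(self, uri, representation): ...
    def delete(self, uri): ...

class HomeBuyerDB(Resource):
    """One interchangeable implementation behind the uniform interface."""
    def __init__(self):
        self.store = {}
    def get(self, uri):
        return self.store.get(uri)
    def put(self, uri, representation):
        self.store[uri] = representation

# Client code depends only on the connector (get/put), never on which
# implementation is behind it, so swapping databases needs no new code.
db = HomeBuyerDB()
db.put("/buyers/1", {"name": "Ada"})
print(db.get("/buyers/1"))  # {'name': 'Ada'}
```

A WSDL contract full of operations like `getNewHomeBuyerByRegion` is the opposite design; every new database means a new interface and new client code.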

• • •

November 1, 2005

Principled Sloppiness

Filed under: integration,semanticweb,webarch,webservices — Mark Baker @ 11:52 am

Adam Bosworth has published a great article in ACM Queue titled Learning from THE WEB. As you might expect, it’s very much in the vein of previous missives from Adam, where he highlights the advantages of Web based development, much as we espouse here.

As surely comes as no surprise to my regular readers, I largely agree with the points Adam makes in this article. But rather than focus on those points, I’d like to instead make an observation about his writing style, and the descriptors he uses for the Web, since I feel this makes for an interesting contrast with my own style.

Adam uses words like “sloppy” to describe what I would describe as the principled application of must-ignore style extensibility (as seen throughout Web architecture). It’s not that either descriptor is “better” or “worse” necessarily, just different sides of the same coin … which isn’t recognized nearly enough IMO, by those who promote “better” alternatives to Web based development, e.g. Web services. But then that’s the same point Dick Gabriel was getting at with his schizophrenic and self-contradictory ramblings on Worse is better.

Update: the one part of the article I took issue with – the critique of the Semantic Web – is well countered by Danny Ayers.

• • •

October 4, 2005

Yet Another Weblog

Filed under: integration — Mark Baker @ 7:57 am

If you’re interested in the use of XML in compound documents, I’ve started writing a semi-regular “column” on the xfytec community site (specifically, the news section, but the site’s going to be restructured to make news more prominent).

• • •

August 9, 2005

W3C CDF WG posts first draft of compound document framework specification

Filed under: integration,webarch — Mark Baker @ 12:31 pm

After several months of work, the W3C Compound Document Formats WG has published the first draft of the specification of the base framework and profiles.

This is important work with broader applicability than might seem initially apparent, as although the objective of the WG is to focus on visual formats such as HTML, SVG, and others, the framework should be reusable for non-visual languages in many cases.

As a member of the WG (representing my client Justsystem), the part of the spec that consumed most of my time was the base framework, and in particular specifying how to bind it onto the existing Web in order to maximize its generality as well as the probability of success. This is a much harder task than it seems, as demonstrated by the fact that this version of the draft asks more questions than it answers about this issue. But that’s ok, it’s still early in the process, and I’m confident that there’s a fruitful way forward. Please though, if you have some experience in the XML-meets-MIME/media-types space, we’d love to hear what you have to say.

• • •

July 12, 2005

JBI for document oriented integration?

Filed under: architecture,integration,webarch,webservices — Mark Baker @ 3:07 pm

When it was first announced last month, I had a quick look at JBI to see if they were repeating some of the same mistakes I perceived in the architecture of Web services. I caught a glimpse of figure 3 in section 4.1 where it describes a WSDL-based RPC messaging model, and immediately concluded it was misdirected.

The next week at JavaOne, I ran into Mark Hapner, and we had a talk about SOA, document orientation, REST, and JBI. It was a long, in-depth, thought-provoking talk that left me realizing, among other things, that this guy really gets document orientation. I’m not sure he fully appreciates the relationship of it to REST, but he completely understands what it means to do distributed computing with documents. As we were wrapping up, he surprised me by saying that JBI was developed, at least in part, with document oriented and RESTful uses in mind, and that he’d be interested in my thoughts on it. So, here I am, having a deeper look into JBI.

Ok, so having read pretty much the whole thing (minus the APIs and management stuff), I’m not seeing the document orientation friendliness in JBI at all. The container architecture seems reasonable and could support it, but as it also supports the WSDL messaging model – in my opinion, anathema to integration – perhaps it suffers from the far too common problem of trying to be all things to all people.

I guess I’m with Don Box on this;

I’ve always believed that the Java folks found the sweet spot with the Servlet+JDBC combination back in the 1990’s

• • •

July 5, 2005

Rhode Island Government Web Services

Filed under: integration — Mark Baker @ 10:13 am

Jeff Barr reports on the Rhode Island Government Web Services effort.

It’s another example of “Accidentally RESTful”, so I hope they take care to evolve this application in the future. But there’s no missing the RESTful, integration-centric objectives of the project. In his blog post announcing it, S. James Willis writes;

Web 2.0 applications lean towards making small pieces of data available to users in such a way that the data can easily be married to other small pieces of data from disparate sources.

• • •

June 24, 2005

Web page as contract

Filed under: integration — Mark Baker @ 12:22 pm

It’s these kinds of integration tensions that remind me why I love the Web so much.

So, apparently, Google went and updated its GMail site, which caused a bunch of Greasemonkey scripts to fail. Conspiracy theories ensued.

Dare Obasanjo takes an extreme position on the issue;

Repeat after me, a web page is not an API or a platform.

It might be extreme, but I think it’s also a reasonable position, as I expect practically all Web developers would laugh out loud at hearing that somehow the form of the HTML they write is supposed to be, in some manner, “consistent” between versions. But I feel it important to note that HTML pages do, in practice, have some important “platform” qualities.

Consider CSS style sheets, and how they bind to HTML pages through the use of selectors and the cascade, both of which use information in the HTML; change the HTML in an “inappropriate” way – changing something that the CSS was depending upon – and whammo, broken Web site.

I hear you saying, “That’s different!”. Yes, it is, to a large extent. The HTML and CSS are authored by the same person (or at least are under the control of a single entity), so the interface between them does not constitute a public platform, just a private one, providing value in the form of a separation of concerns – and the resulting improvement in reusability – to site authors.

But then again, not all style sheets are created equal! Consider user style sheets, created by the end user for their own benefit, and applied by the browser on their behalf without any involvement of the server. For example, a visually impaired user could use them to increase the size of all fonts. As some examples show though, they’re typically very generic, commonly using type selectors, since those refer to the HTML language itself rather than to the markup of any particular site. And that’s certainly a best practice that I think is worth calling out;

For maximum reuse, depend only on the language, not on the form of any particular instance of that language
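The same principle applies beyond CSS. As an illustrative sketch of my own (not part of any of the tools mentioned above): a scraper that keys off the HTML language’s `<a>` element, like a type selector, keeps working across redesigns, whereas one keyed to a page’s particular class names breaks the moment the markup changes.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Depends only on the HTML language (the <a> element), never on
    the structure of any particular page -- the programmatic analogue
    of a user style sheet's generic type selectors."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

p = LinkCollector()
# Works regardless of the surrounding divs, classes, or nesting:
p.feed('<div class="x"><a href="/one">1</a></div><p><a href="/two">2</a></p>')
print(p.links)  # ['/one', '/two']
```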

So, as I see it, this is all part of the ongoing battle between the Gods of Literature; the writer, and the reader. That relationship is incredibly dynamic, at least on a Web-historic time scale, with each side constantly pushing for more control. While I doubt there’s any balance to be found there, in either the short or long term, I would say that it’s interesting to observe that recent initiatives such as Greasemonkey, Google Autolink, and even microformats (by virtue of the information they add to a page making it more reusable for readers) are finally giving the reader their due after years of largely appeasing the writer.

Update; here’s yet another example of this same tension.

• • •