The day before yesterday, Google announced that their crawler will issue POST requests in the near future. The announcement led to a discussion about whether issuing POST requests breaks the semantics of Web Architecture (cf. the comments in GET, POST, and safely surfacing more of the web, or blog posts such as Google vs. Web Architecture).
However, I doubt that it breaks Web Architecture at all.
Having said that, let me elaborate on what I mean when talking about Web Architecture.
R. T. Fielding shows that the “early Web architecture was based on solid principles–separation of concerns, simplicity, and generality–but lacked an architectural description [...]”. Moreover, his hypothesis that the “design rationale behind the WWW architecture can be described by an architectural style consisting of the set of constraints applied to the elements within the Web architecture” proved valid when he derived REST as such an architectural style. So when I talk about Web Architecture, I am talking about such constraints. One might therefore think that I mean REST when talking about Web Architecture. However, that is not what I have in mind. When talking about Web Architecture, I mean the constraints implied by the specifications, programming languages, protocols, etc. that are actually used to build (distributed) web applications in the context of the WWW.
Therefore, Google’s crawler issuing POST requests could only ruin Web Architecture if it broke constraints implied by the HTTP protocol. But that is not the case.
The only valid argument here is that it might not be safe to issue such a request, because POST is a “possibly unsafe action”. But it is only possibly unsafe; within the context of a concrete web application it can be perfectly safe to issue a POST request, because the developers designed it to be that way.
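To make the “possibly unsafe” point concrete, here is a minimal sketch (all names and data are made up for illustration, not taken from any real application) of two POST handlers from the same imaginary web app. HTTP labels both requests possibly unsafe; whether they actually are depends entirely on what the application does with them.

```python
# Hypothetical sketch: two POST handlers from an imaginary web app.
# HTTP calls POST "possibly unsafe" -- whether a given POST is actually
# safe depends on what the application does with it.

STATE = {"comments": [], "faq": {"crawler": "fetches pages automatically"}}

def post_search(form):
    # Safe in practice: a read-only lookup, even though it arrives via POST.
    term = form.get("q", "").lower()
    return STATE["faq"].get(term, "no match")

def post_comment(form):
    # Genuinely unsafe: mutates server state, so a crawler should not call it.
    STATE["comments"].append(form.get("text", ""))
    return "comment stored"
```

A crawler submitting the first form changes nothing on the server; submitting the second one would leave a comment behind, which is exactly the kind of action the HTTP specification warns about.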
At this point a valid objection would be that there is no mechanism within the HTTP specification to communicate this fact from the server to the client. So how can Google deduce that it would be safe to issue a POST request?
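Because HTTP offers no header or attribute saying “this POST is safe”, a crawler can only guess from surface features of the form it found in the page. The following is a purely hypothetical heuristic of my own, not Google’s actual algorithm, just to illustrate what such a guess might look like:

```python
# Hypothetical heuristic (not Google's actual algorithm): since HTTP has
# no way to declare a POST safe, a crawler can only inspect the form it
# found in the page and refuse to submit anything that looks risky.

RISKY_FIELD_TYPES = {"password", "file"}
RISKY_NAME_HINTS = ("delete", "buy", "pay", "order", "unsubscribe")

def looks_safe_to_submit(form):
    """form: {"action": str, "fields": [{"name": str, "type": str}, ...]}"""
    for field in form["fields"]:
        # Credentials or uploads suggest a state-changing interaction.
        if field.get("type") in RISKY_FIELD_TYPES:
            return False
        # Field names hinting at purchases or deletions are off limits too.
        name = field.get("name", "").lower()
        if any(hint in name for hint in RISKY_NAME_HINTS):
            return False
    return True
```

A plain search form with a single text field would pass such a filter, while a login or checkout form would not. The point is that any such rule is a heuristic layered on top of HTTP, not something the protocol itself provides.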
The answer lies with the application itself: its developers decide which interactions are reachable at all. If content is only obtainable via a POST request – a search form, say, or an XMLHttpRequest call – then issuing that POST is exactly the interaction the application was designed for. Google’s crawler, as a client that follows the hypermedia controls the application itself exposes, is effectively forced to issue POST requests to reach that content. Hence, it does not ruin Web Architecture.