Currently, the spec is pretty specific about what is supposed to happen in this case.
If you scroll down to the POST request portion of “Update to-many relationships”, you’ll notice the /resource/{id}/relationships/{rel-name} link requires a resource identifier object to be POSTed in order to add an existing resource to the set of resources related to the primary resource through rel-name.
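For illustration, using the articles/comments model from the spec’s own examples, adding an already existing comment to an article’s comments relationship looks roughly like this (the comment id "123" is assumed to already exist on the server):

```http
POST /articles/1/relationships/comments HTTP/1.1
Content-Type: application/vnd.api+json
Accept: application/vnd.api+json

{
  "data": [
    { "type": "comments", "id": "123" }
  ]
}
```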
Unfortunately, this seems to be a two-step process under the current spec. Some contributors claim you can use the convention of /resource/{id}/{rel-name} to POST a member of the relationship directly, but this is unintuitive and hacky, and most certainly not in the spec as far as I can see. It also breaks the hypermedia-driven paradigm by requiring you to encode stateful information in the URL.
The ideal solution would be for the spec to allow you to POST a ‘resource object’ to the resource relationship link directly, and require the server to create the resource if it is of the correct type and then add it to the relationship set, following the conventional error handling process described in the spec.
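Purely as a sketch of that suggestion, and again assuming the articles/comments model, such a request might look something like this. To be clear, this is not valid JSON:API today; it is only an illustration of the proposed behavior:

```http
POST /articles/1/relationships/comments HTTP/1.1
Content-Type: application/vnd.api+json

{
  "data": [
    {
      "type": "comments",
      "attributes": { "body": "A brand new comment" }
    }
  ]
}
```

The server would then be expected to create the comment and add it to the article’s comments relationship in one step.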
Nested relationships are a common feature of RESTful APIs. I don’t see anything unintuitive about posting to /resource/{id}/{rel-name}, or even having an entire set of RESTful endpoints there.
That’s understandable, but those aren’t RESTful APIs. Those are likely CRUD APIs, and json-api is about enhancing CRUD APIs and pushing their users into a more hypermedia-driven space.
URL resource partitioning is easy in the short term and handicaps the API designer and consumer in the long term.
Having said that, I did actually just put up a suggestion on GH about solving this issue here.
If you are going to follow the spec, follow it fully, because ultimately the people who benefit from the use of the spec are your users. Snowflake or partial implementations which fall outside the bounds of ‘unconditionally compliant’ and ‘conditionally compliant’ with the specification actually hurt the consumer in the long run: their abstractions of the structure aren’t complete, and they can’t use nice tools like Katharsis to operate against your service.
Back on the initial question: as I posted in the GitHub issue, I believe there is a better path than your back-reference approach, one which is also currently supported by the specification. Namely, you POST a compound document to the relationships link /resource/{id}/relationships/{rel-name}, where the primary data is a ‘resource identifier object’ with a user-defined ID and the new resource itself appears in the included section.
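A rough sketch of what that request might look like, assuming the articles/comments model and a server that accepts client-generated UUIDs and an included section on a relationship endpoint:

```http
POST /articles/1/relationships/comments HTTP/1.1
Content-Type: application/vnd.api+json

{
  "data": [
    { "type": "comments", "id": "550e8400-e29b-41d4-a716-446655440000" }
  ],
  "included": [
    {
      "type": "comments",
      "id": "550e8400-e29b-41d4-a716-446655440000",
      "attributes": { "body": "A brand new comment" }
    }
  ]
}
```

The ID here is client-generated, which is why the question of identifier formats comes up later in this thread.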
However, I think the direct POST / PATCH would make clients easier to implement in the long term.
That might be a better question for @steveklabnik or @dgeb. However, is there a reason you are implementing this all yourself, and not using a library which has done a lot of this heavy lifting for you in your language of choice?
I’m not sure I understand why you make this assertion at all. An identifier is meant to be a key value, so why would the format of the identifier matter in any way such that a user should not use it? I’ll let you make your arguments for this statement, if you choose, before I comment.
As far as the back references are concerned, from what I see you are correct. I previously stated exactly that: the back-reference method is a valid approach under the specification, but I did make the claim that it requires a bit of mental gymnastics. I believe this claim is both true and a valid criticism of its usability. The compound document approach is also currently valid within the specification, and far more straightforward.
For the sake of concisely defining the behavior of the specification, it isn’t in our collective best interest to dilute the means of operating against a server running a json-api service. However, as far as semantic sugar is concerned, I do think the best approach would be to simply allow the inclusion of a ‘resource object’ in the POST and PATCH collections. My previous paragraph, of course, enumerates the way you COULD do the same now, albeit in a more verbose and constrained fashion.
You’ll find they’re quite similar, except unlike CRUD, we’re abstracting things a bit more. You’ll use links, relationships, and sub-resources to help abstract away things like pivot tables and foreign keys. The hardest part is not getting hung up on your data representation and coming up with more elegant ways to represent your data.
There isn’t a “wrong” or “right” way to do it as long as the spec is followed. JSONAPI is more of a standardized set of tools to accomplish these tasks. Welcome to the really difficult part of programming: systems design.
While I’m very aware of the issues you bring up when designing a hypermedia-driven service, and agree with the overall premise that system design is the most important and complicated part of the implementation, you still haven’t answered my question. Why “shouldn’t” people use UUIDs? You made no mention of any negative property of their use to recommend against it.
The traditional auto-incremented integer ‘best practice’ is always a worse decision for security (sequential IDs are guessable and leak information about record counts) and for cohesion.
As far as the other methods go, I used the term ‘semantic sugar’ in the sense of ‘syntactic sugar’, meaning something functionally equivalent but more intuitive to use.
In terms of relative intuitiveness, the proposed modification approach is slightly better than the compound document approach, which in turn is markedly better than the back-reference approach.
@michaelhibay My last post referring to system design was directed at @mikeni and meant to bring this thread back on topic. I’m not going to wreck this poor guy’s inquiry with pointless bickering.
I say UUIDs are a dumb decision and I don’t like the compound doc approach. You say incremented integers are flawed and don’t like backreferences. Ultimately, he must make the design decision best for his project.
I apologize if my response came off as bikeshedding in your mind. While I feel I am well versed in many subjects, I heartily welcome every opportunity to learn and was very interested to know your thoughts. I was challenging the assertion that UUID usage is a mistake in these situations as a chance to learn why you would say this.
The back-reference solution is approaching a bike-shedding discussion which, to be sure, would be better served as a specification-level discussion rather than an implementation-level one. The point is somewhat moot if and when the use of hypermedia can obviate the URL, but hypermedia is unfortunately not a requirement, and my point was that the spec should offer a limited set of solutions to this requirement in order to keep the interaction and documents as uniform as possible.
Michael, in the “Update to-many relationships” section of the spec, can you please put up an example of a POST with a NEW comment (meaning what the URL and JSON would look like)? From your statement above, it sounds like adding a NEW comment is a two-step process. Are you indicating that you would have to first do a POST to /comments and then a POST to /articles/1/relationships/comments? I started a new thread yesterday about this issue with foreign keys, and this approach does not work in that situation. I’m trying to understand the correct way to deal with foreign keys and updating to-many relationships.
The more examples, the easier it is to understand what one should do in different scenarios. That would be very helpful. Thank you.
–Cam–
You can do this with one request using a compound document posted to /comments. I agree, though, that there need to be more examples of this in action. You would POST something like the following, and the relationship would be created along with the object.
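A minimal sketch of that single request, assuming the new comment should be attached to the existing article 1 (the relationship name “article” and the “body” attribute are illustrative):

```http
POST /comments HTTP/1.1
Content-Type: application/vnd.api+json

{
  "data": {
    "type": "comments",
    "attributes": {
      "body": "A brand new comment"
    },
    "relationships": {
      "article": {
        "data": { "type": "articles", "id": "1" }
      }
    }
  }
}
```

The relationships member here uses resource linkage to point at the already existing article, so the server can create the comment and wire up the relationship in one step.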
Until I have a measurably better suggestion, I won’t complain about it anymore. I don’t have numbers to back up any claim that I’d make, and all my complaints would be performance related.
I’ve been researching alternatives, and if I get around to benchmarking them I’ll post the results in a new thread.
I’d be interested in seeing those results. Despite what they reinforce in college, performance-related issues which are outside the bounds of algorithmic complexity are generally best addressed when they show signs of being a problem. I regularly have to fight the urge of premature optimization, so I understand where you are coming from.