Coding can be boring
Developing applications, especially on the job, is often filled with repetitive grunt work. Take APIs, for example: Let’s say you work for a supplier of some sort, and your company has some data you need to share with business partners — like available inventory. Allowing business partners to see what you have in stock, reserve some of that inventory, and make requests against your future production helps both businesses work better together.
You’ve already got the data in your database, which is half the battle. That means you probably also have a schema (model) for that data…after all, you likely needed one to create the database in the first place. But sadly, none of this is likely to help much now that you need an API. Even if you’re a fan of OpenAPI (aka Swagger), you’re still going to have to write something to get an API up and running, and make a bunch of choices:
- How RESTful should it be?
- How will you support client and server versioning?
- Parameters? Inputs? URL encoding?
- Security, compliance, and OWASP attack-prevention considerations?
Choices, choices, choices. And if you work at a large company, there are almost certainly platform consistency requirements, style guidelines, and governance and administration constraints on top of all this. And, sadly, none of this really has anything specific to do with that supplier data you were trying to share with your downstream business partners…
But what if that same CREATE TABLE statement or other specification used to create your database could also generate the API for you? If you had a “data model to API compiler”, you could just run your database model through it and get an API out of the other end! Then, if that API could also be hosted by a fully managed public cloud service, such as Amazon’s API Gateway, Google’s Apigee, or Microsoft’s Azure API Management, you’d have an end-to-end solution without writing any new code or specs.
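To make the “data model to API compiler” idea concrete, here is a minimal sketch of the translation step. The table definition, type mapping, and query names are all invented for illustration; a real compiler (such as Vendia Share’s) handles far more, but the core transformation looks like this:

```python
# Toy "data model to API compiler": translate a table definition into
# GraphQL SDL. The type mapping and query naming are hypothetical.
SQL_TO_GRAPHQL = {"INTEGER": "Int", "TEXT": "String", "REAL": "Float", "BOOLEAN": "Boolean"}

def compile_table_to_graphql(table_name: str, columns: dict) -> str:
    """Emit a GraphQL object type plus list/get query fields for one table."""
    fields = "\n".join(
        f"  {name}: {SQL_TO_GRAPHQL[sql_type]}" for name, sql_type in columns.items()
    )
    return (
        f"type {table_name} {{\n{fields}\n}}\n\n"
        f"type Query {{\n"
        f"  list_{table_name}: [{table_name}]\n"
        f"  get_{table_name}(id: Int): {table_name}\n"
        f"}}"
    )

sdl = compile_table_to_graphql(
    "Inventory", {"id": "INTEGER", "sku": "TEXT", "quantity": "INTEGER"}
)
print(sdl)
```

The same single source of truth — the column dictionary — could just as easily emit the matching CREATE TABLE statement, which is exactly why the two can never drift apart.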
Throw in support for GraphQL (instead of REST), and you’d also have a solution that can be made not only codeless but also “SDKless”, allowing your clients to also avoid the challenges of (re)deploying matching code every time you make a minor change.
Now we’ve got a “codeless API trifecta” going:
- Hosting the API doesn’t require any code because it’s handled by a fully managed cloud service.
- Expressing the API doesn’t require any code because it’s generated automatically from a shared data model that also creates your database (and potentially other aspects of your business logic tier as well).
- Calling the API doesn’t require custom code, even as the server evolves it over time; a generic GraphQL client can be used. (Of course, some people _prefer_ type-specific SDKs, and those can also be generated by the same schema compiler!)
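The “SDKless” client in the last point is worth seeing: a GraphQL call is just a JSON POST body containing a query string and variables, so no generated code is needed on the client side. The field names and query below are hypothetical; any HTTP library could send the resulting payload:

```python
import json

def build_graphql_request(query: str, variables: dict) -> bytes:
    """Serialize a GraphQL operation into the standard POST body."""
    return json.dumps({"query": query, "variables": variables}).encode("utf-8")

# If the server later adds new fields, this query keeps working unchanged,
# because a GraphQL client receives only the fields it asks for.
query = """
query ($sku: String) {
  inventory(sku: $sku) { sku quantity }
}
"""
body = build_graphql_request(query, {"sku": "WIDGET-42"})
print(body.decode("utf-8"))
```

That evolution tolerance is the key contrast with a typed SDK, which generally has to be regenerated and redeployed whenever the server’s types change.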
Generating APIs from data models has a number of other advantages over handcrafting them:
- Simplicity and fast time-to-solution – Because a single data model (schema) is used to generate the database, the API, the SDK (if you’re using a custom one), and often even some scaffolding in your business logic, you can have major elements of an end-to-end business application available and deployed, with production readiness, in minutes instead of months — without writing code for any of them. (Of course, you’ll need your actual application logic and client UI, but at least the “boilerplate stuff” will be handled for you.)
- Guaranteed correctness and consistency – Because this approach is DRY (i.e., everything is generated from the same model automatically), your API, database, and GraphQL client can’t accidentally get out of sync with one another. Many common defects that creep into business software systems, where the root cause is some kind of misalignment or versioning issue, are eliminated by design.
- Zero ops – Using public cloud-based services guarantees that the challenges of babysitting servers and dealing with fault tolerance and scaling up and down are handled by someone else’s ops team — at least for your storage and API tiers.
- Built-in versioning and access controls – Not only does the data model turn out to be a convenient way to represent a public API, but it also becomes a great place to hang other critical metadata, such as which parts of your data model should be public versus private, encrypted versus cleartext, and so forth. Like the API itself, these access controls can be compiled directly into the code that implements them. This makes another major element of your platform completely code free…and also ensures that it’s handled by a compilation process that’s uniform, standardized, and fully vetted, instead of a bunch of ad hoc code inserted into every application in a slightly different way.
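Here is one way the “hang metadata on the data model” idea from the last point could look. The `x-visibility` annotation is an invented convention, not a real Vendia or JSON Schema keyword; the point is that a compiler can turn declarative annotations into enforced field-level controls:

```python
# Data model with invented per-field access-control annotations.
MODEL = {
    "Inventory": {
        "sku":       {"type": "string",  "x-visibility": "public"},
        "quantity":  {"type": "integer", "x-visibility": "public"},
        "unit_cost": {"type": "number",  "x-visibility": "private"},
    }
}

def compile_view(model: dict, visibility: str) -> dict:
    """'Compile' a partner-facing view: keep only fields the caller may see."""
    allowed = {"public": {"public"}, "private": {"public", "private"}}[visibility]
    return {
        type_name: {f: spec for f, spec in fields.items()
                    if spec["x-visibility"] in allowed}
        for type_name, fields in model.items()
    }

partner_view = compile_view(MODEL, "public")
print(sorted(partner_view["Inventory"]))  # unit_cost is excluded
```

Because the filtering is generated from the model in one standard place, every API built from that model enforces the same rules — rather than each application re-implementing them slightly differently.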
Wondering if this approach works IRL? We’ve done it with Vendia Share.
Our customers are looking to share critical business data, so it’s especially important that the data in their data alliance ecosystems be consistent, complete, and up to date. (“Up to date” means not only that the data itself is current but also that any data model changes are accurately propagated among all the partners.) It’s a perfect match for codeless APIs!
Vendia uses the approach described above, with a simple developer workflow:
- One of the partners in the data alliance takes the lead and provides the initial data model to Vendia. Vendia Share’s schema compiler translates this model into a number of artifacts, including a codeless API, database schema, and so forth.
- The compiler also deploys those APIs to every partner, choosing the appropriate cloud and region based on that partner’s location and preferences.
- Each partner sets up one or more connectors (to existing data sources) and one or more clients (to the codeless GraphQL APIs). Because the shared API is based on GraphQL, a custom SDK isn’t required, but the compiler also generates one for clients that prefer it. Either way, no special coding is required to get a partner up and running!
- Now each partner is free to transact with the combined system, reading and writing data through nodes that are isolated both operationally and from a security standpoint, while also ensuring that all data in the system is kept consistent, correct, and up to date to form a shared source of truth.
- Over time, business needs might change: Additional items might need to be modeled, field names updated, or obsolete information deprecated. Vendia handles this by allowing data schema changes, which are then compiled and redeployed to all the nodes through an automated SaaS platform that maintains the integrity of existing clients and data.
- As business partnerships themselves change, existing members of the data alliance may leave and new ones can come on board. Once again, Vendia Share uses the codeless API approach to make adding new partners quick and easy. (The codeless API approach also couples alliance changes with automated data backfills and archiving, but that’s a topic for another blog!)
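The schema-evolution step in the workflow above implies a compatibility check before anything is redeployed: changes that only add to the model are safe for existing clients and data, while removals or type changes are not. This sketch of such a check is illustrative only (flat field-to-type maps, invented names), not Vendia’s actual implementation:

```python
def breaking_changes(old: dict, new: dict) -> list:
    """Return field-level changes that would break existing clients."""
    problems = []
    for field, old_type in old.items():
        if field not in new:
            problems.append(f"removed field: {field}")
        elif new[field] != old_type:
            problems.append(f"changed type of {field}: {old_type} -> {new[field]}")
    return problems

v1 = {"sku": "string", "quantity": "integer"}
v2 = {"sku": "string", "quantity": "integer", "warehouse": "string"}  # additive only

assert breaking_changes(v1, v2) == []  # safe to compile and redeploy everywhere
assert breaking_changes(v2, v1) == ["removed field: warehouse"]  # would break clients
```

Running this kind of gate automatically at “compile” time is what lets a platform redeploy model changes to every node while maintaining the integrity of existing clients and data.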
Throughout this entire data and data model lifecycle, no API grunt work coding is required — Vendia’s customers enjoy the benefits of a codeless API approach from start to finish.
If you’re not using Vendia Share, you can still get some of the benefits of codeless APIs with these four steps anyone can take:
- Use a managed API service instead of hosting it yourself. While technically this doesn’t make generating the API any more codeless, it certainly simplifies the act of deploying, hosting, scaling, securing, and maintaining the API. It’s a great way to at least remove the foundational ops burden while you work on the steps below.
- Use OpenAPI (aka “Swagger”) to generate your REST APIs. Using OpenAPI lets you abstract some of the details of API specification. It won’t magically align the API with your underlying data model, but it’s still faster and less error-prone than raw coding.
- Switch to GraphQL. Even better than improving your REST game is getting into another ballpark entirely. You can get some of the benefits of GraphQL — such as not requiring SDKs and easier support for data evolution — without the benefit of a data model compiler, though it certainly helps to have one. There are a lot of other benefits to GraphQL, too, including how API security can be applied to GraphQL.
- Standardize your versioning and access control mechanisms. While a data model compiler is ultimately the best way to ensure alignment between your data and metadata, you can still create consistency (and likely lower your bug and security incident counts in the process) by adopting a standard way of handling concepts like API versioning, data versioning, and multi-party data access control across your API portfolio. The work you invest to standardize it up front will make creating new APIs, evolving existing APIs, and dealing with bugs and other errors a lot easier over time.
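As a small illustration of that last step, one standardization tactic is a single shared helper that every service in the portfolio uses to resolve the client’s requested API version. The header name and fallback policy here are an invented convention — the value is that the convention is defined once, not re-decided per API:

```python
# Shared version-negotiation helper, used identically by every API in the
# portfolio. Header name and date-based version strings are hypothetical.
SUPPORTED_VERSIONS = ("2023-01-01", "2023-06-15")

def resolve_version(headers: dict) -> str:
    """Honor an exactly supported requested version; otherwise default to latest."""
    requested = headers.get("X-Api-Version")
    if requested in SUPPORTED_VERSIONS:
        return requested
    return SUPPORTED_VERSIONS[-1]

print(resolve_version({"X-Api-Version": "2023-01-01"}))  # -> 2023-01-01
print(resolve_version({}))                               # -> 2023-06-15 (latest)
```

Whether versions live in a header, a URL path, or the schema itself matters less than picking one mechanism and applying it uniformly.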