Protocol context manager
In big microservice-based systems, or systems with many protocols, you often need to make a change that ripples out through the entire system. If you have X services sending messages between them, you have on the order of X message schemas to keep in sync. Let's manage the schemas centrally.
Adding a field to a message should be a simple change in a central system. The rest of the system should just adapt to the new field.
In traditional microservice architectures there's either a monorepo or a repository per microservice, and each service has to be changed to add the new field.
In this design each service retrieves its schema from a central source, and at every point where messages are constructed there is a context area. This context contains all the known state at that point in time.
Someone can move a piece of context into the protocol schema centrally, so adding a new column to a database or a new field to a protocol message is as simple as changing it in one place.
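As a sketch of what "one place" might look like, here is a hypothetical registry entry; the message kind, field names, and context paths are all invented for illustration:

```python
# Hypothetical central registry entry: each message kind lists the
# context paths that appear in its payload. Adding a field to every
# service's messages is one edit here, not X code changes.
SCHEMAS = {
    "new-order-message": {
        "version": 2,
        "fields": ["order.id", "order.total", "user_id"],
    },
}
# Adding "order.coupon_code" to the fields list is the whole change;
# services that render messages from context pick it up automatically.
```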
To me, it looked like a kind of "docker registry" for schemas (in effect, a "schema registry"). So I asked a friend who specialises in devops to comment on this. His thoughts were:
"Makes sense, however a db or files, another state feature are supposed to have only owner, if various services use 1 db, it means you will loose consistency. In terms of schema yes, it makes sense. However owner of service use different versions for API. Once you release some new changes on API, you have to increase API version, like /api/v2/blabla.I think Protobuf or graphql can solve this problem partly."
Me: "So, you mean, a point could be made, that protocol buffers, and API versioning make the central management of messaging schemas not necessary?"
Him: "All modern API providers use versioning... Ok, field appears automatically, but if application is not aware what it is, the field is unuseful."
Personally, I like the idea of developing data schemas independently of the microservices, because then everyone can take part in developing the ontological model of the system, which never lives inside just one microservice.
Many services pass on messages or store data from other services within the payload. Sometimes the service in question doesn't care about particular fields at all, but another service further down the chain might.
For example, in one event-driven microservice architecture I worked on, we had MongoDB store a cached payload of data from a previous service, to avoid every service thereafter fetching it again. It also kept the data in sync.
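A rough sketch of that caching pattern, assuming pymongo; the event shape and collection names are invented for illustration:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
cache = client.pipeline.cached_payloads

def handle_event(event):
    # Cache the upstream service's payload verbatim, keyed by order id,
    # so downstream services read it from here instead of re-fetching.
    cache.replace_one(
        {"_id": event["order_id"]},
        {"_id": event["order_id"], "payload": event["payload"]},
        upsert=True,
    )
```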
Schemas are used in RabbitMQ systems but are often managed in an ad hoc way: you have a Java class in each microservice of the network that you have to keep in sync by hand. It's a nightmare.
Manual versioning is a solution; I just think it could be done automatically. Press a "new version" button in a schema management UI, add the new fields, and tick a checkbox for each message kind the field applies to (a sketch of the data model behind such a UI follows).
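A minimal sketch of that data model; the class and function names are assumptions, not an existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class SchemaVersion:
    version: int
    # message kind -> the context fields ticked for it
    fields_by_message: dict[str, list[str]] = field(default_factory=dict)

def new_version(current: SchemaVersion, new_field: str,
                applies_to: list[str]) -> SchemaVersion:
    # "Press the new version button": copy the current version and
    # tick the new field for the selected message kinds.
    nxt = SchemaVersion(
        current.version + 1,
        {k: list(v) for k, v in current.fields_by_message.items()},
    )
    for kind in applies_to:
        nxt.fields_by_message.setdefault(kind, []).append(new_field)
    return nxt
```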
If we want autonomous, self-managed organisations like DAOs, there will need to be some agreement on the protocol format.
The schema registry could be run in high-availability mode.
order_data = db.get_order(order_id)  # fetch the order from the service's DB
context.put("order", order_data)     # everything known goes into the context
context.put("user_id", user_id)
message = context.render("new-order-message")
Here new-order-message is looked up in the schema registry to see what data it contains. The registry decides which pieces of the context, including data inside context sub-objects, appear in the message payload when it is rendered.
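A minimal sketch of how context.put / context.render could work against the registry; the dotted-path lookup and the hard-coded SCHEMAS table are assumptions, not a real library:

```python
# Same shape as the registry entry sketched earlier; in practice this
# would be fetched from the schema registry, not hard-coded.
SCHEMAS = {
    "new-order-message": {
        "version": 2,
        "fields": ["order.id", "order.total", "user_id"],
    },
}

class Context:
    def __init__(self):
        self.state = {}

    def put(self, key, value):
        self.state[key] = value

    def _lookup(self, path):
        # Walk dotted paths into context sub-objects, e.g. "order.total".
        obj = self.state
        for part in path.split("."):
            obj = obj[part] if isinstance(obj, dict) else getattr(obj, part)
        return obj

    def render(self, message_kind):
        # The registry decides which context fields appear in the payload.
        schema = SCHEMAS[message_kind]
        return {path: self._lookup(path) for path in schema["fields"]}
```

With the earlier snippet, context.render("new-order-message") returns only order.id, order.total and user_id, however much else the context holds; adding a field to the central schema changes every service's payload on its next render.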
In my email-based social network I defined lots of schemas up front, to handle shares of various kinds. I wrote an example XML document for each message, but never used the examples after writing them.