In developer blogs over the last year or so, I’ve seen a drift away from a single data model that describes the solution and that every aspect of the app uses. It used to be that if you had a customer object, that was it. You used that object and its properties for everything.

More prevalent now is the concept of mapping the data onto the specific use cases that need it. I believe this started out as "views" of the same data. A customer "view" of their invoices would be different from the vendor's "view" of the same invoices.

But it was still the same invoices.

But lately I'm hearing about "domains" where the objects in the domain are unique to the functions they support. In this case, a customer "invoice" would be a different object with different code than a vendor "invoice" used by the back office.

The goal, I believe, is "loose coupling". That is, making a change to the vendor invoice to improve functionality doesn't automatically affect the customer "invoice".

For example: you could add a "clerk notes" field to an invoice to improve operations for the company, but never change the customer invoice. Hence that portion of the system never breaks. In theory.
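To make that concrete, here's a minimal sketch of the two-domain idea (the class and field names are my own invention, not from any particular codebase): the back-office invoice grows a clerk-notes field while the customer-facing invoice stays untouched.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerInvoice:
    # Customer-facing domain: only what the customer needs to see.
    invoice_id: str
    amount_due: float
    due_date: str

@dataclass
class VendorInvoice:
    # Back-office domain: same underlying invoice, richer shape.
    invoice_id: str
    amount_due: float
    due_date: str
    # New field added for operations; the customer object never gains it.
    clerk_notes: list[str] = field(default_factory=list)

# The vendor side can evolve freely:
v = VendorInvoice("INV-1001", 250.0, "2024-07-01")
v.clerk_notes.append("Customer called about a discount")

# The customer side is untouched by that change:
c = CustomerInvoice("INV-1001", 250.0, "2024-07-01")
```

Because the two classes share no code, a change to one can't ripple into the other; the price is that they share an `invoice_id` and someone has to keep the overlapping fields honest.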

I think I also know why this has become practical. Table storage makes storing data so inexpensive that it’s cheaper to copy the data around for multiple users – representing it differently in the process – than to maintain a one-size-fits-all design.

The reason I mention this is that I believe this approach makes sense here: the data, the indexing, and the retrieval patterns are all different for different parts of the system.
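Here's a rough sketch of what "different indexing and retrieval" could look like, in table-storage style where each copy gets its own partition/row keying (the tables and keys are hypothetical, just to show the shape):

```python
# Same invoice copied into two "tables", each keyed for its own lookup pattern.
customer_table: dict[tuple[str, str], dict] = {}  # partitioned by customer
vendor_table: dict[tuple[str, str], dict] = {}    # partitioned by due month

def store_invoice(customer_id: str, invoice_id: str,
                  due_month: str, amount: float) -> None:
    """Write the invoice twice, once per domain, under different keys."""
    customer_table[(customer_id, invoice_id)] = {"amount": amount}
    vendor_table[(due_month, invoice_id)] = {
        "amount": amount,
        "customer": customer_id,
    }

store_invoice("CUST-7", "INV-1001", "2024-07", 250.0)

# Customer query: "show me my invoices" scans one partition.
mine = [k for k in customer_table if k[0] == "CUST-7"]

# Back-office query: "what's due in July" scans a different partition,
# without touching the customer-facing table at all.
due = [k for k in vendor_table if k[0] == "2024-07"]
```

The duplication buys each side a query that is a cheap single-partition scan, which is the trade the cheap-storage argument is making.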

It almost makes it easier. I can design the client side for their needs and the vendor side for theirs. When data affects both, I have to keep things in sync manually, but as I dig deeper, I believe there's less of this than it might seem.
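The "keep things in sync manually" part could be as simple as a fan-out on write: when a change matters to both sides, one function applies it to each copy (again, a sketch with made-up store names, not a real API):

```python
# Two independent copies of the same logical invoice.
customer_store: dict[str, dict] = {}
vendor_store: dict[str, dict] = {}

def record_payment(invoice_id: str, amount: float) -> None:
    """A payment matters to both domains, so write it to both copies."""
    for store in (customer_store, vendor_store):
        inv = store.setdefault(invoice_id, {"paid": 0.0})
        inv["paid"] = inv.get("paid", 0.0) + amount

record_payment("INV-1001", 100.0)
```

Every shared field needs a fan-out like this, which is exactly why it matters that, as the post says, there turns out to be less shared data than it might seem.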

Now I have to itemize the use cases on the customer app, design the tables and indexes to support those cases and just try building it. Rinse and repeat for the vendor app.

Piece of cake, right?