One of the things I cover early on in my course is how traditional layered architecture drives people to create a business logic layer made up of a bunch of inter-related entities. I see this happening a lot, even though nowadays people are calling that bunch of inter-related entities a “domain model”.
Let me just say this upfront – most inter-related entity models are NOT a domain model.
Here’s why: most transactions don’t respect entity boundaries.
That being said, you don’t always need a domain model.
The domain model pattern’s context is “if you have complicated and ever-changing business rules” – right there on page 119 of Patterns of Enterprise Application Architecture.
Persisting the customer’s first name, last name, and middle initial – and later reading and displaying that data – sounds neither complicated nor likely to change all that much.
Then there are things like credit limits, which may live on the customer entity as well. There are likely business requirements expecting that value to be consistent with the total value of unpaid orders – data that comes from other entities.
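To make that concrete, here’s a minimal sketch of such a rule. The names (CreditPolicy, canAccept, and so on) are mine, purely for illustration:

```java
import java.math.BigDecimal;
import java.util.List;

// Illustrative sketch of a cross-entity business rule: the credit
// limit lives on the customer, but enforcing it needs data from the
// customer's unpaid orders.
class CreditPolicy {
    // True if accepting the new order keeps the customer's unpaid
    // total within their credit limit.
    static boolean canAccept(BigDecimal creditLimit,
                             List<BigDecimal> unpaidOrderTotals,
                             BigDecimal newOrderTotal) {
        BigDecimal unpaid = unpaidOrderTotals.stream()
                .reduce(BigDecimal.ZERO, BigDecimal::add);
        return unpaid.add(newOrderTotal).compareTo(creditLimit) <= 0;
    }
}
```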
The problem this creates is one of throughput.
Since databases typically lock an entire row/entity at a time, if one transaction is changing the customer’s first name, the database will block another transaction that tries to change the same customer’s credit limit.
The bigger your entities, the more transactions are likely to need to operate on them in parallel – and the slower your system gets as the number of transactions increases. This feeds back on itself: blocked transactions have often already operated on some other entity, leaving those rows locked for longer periods of time and blocking even more transactions.
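You can watch this blocking happen with a rough JDBC sketch – the table, columns, and connection URL below are placeholders, not from any real schema:

```java
import java.sql.Connection;
import java.sql.DriverManager;

// Two transactions contending on the same customer row.
public class RowContention {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost/shop"; // placeholder

        Connection tx1 = DriverManager.getConnection(url);
        tx1.setAutoCommit(false);
        // Transaction 1 updates the name and now holds the row lock.
        tx1.createStatement().executeUpdate(
            "UPDATE customer SET first_name = 'Jo' WHERE id = 42");

        Thread other = new Thread(() -> {
            try (Connection tx2 = DriverManager.getConnection(url)) {
                tx2.setAutoCommit(false);
                // Blocks here until tx1 commits – even though the credit
                // limit has nothing to do with the first name.
                tx2.createStatement().executeUpdate(
                    "UPDATE customer SET credit_limit = 5000 WHERE id = 42");
                tx2.commit();
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        other.start();

        Thread.sleep(2000); // the second update sits waiting all this time
        tx1.commit();       // releasing the lock finally lets it proceed
        other.join();
        tx1.close();
    }
}
```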
And the absurd thing is that the business never demanded that the customer’s first name be consistent with the credit limit.
What if we didn’t have a single Customer entity?
What if we had one that contained first name, last name, and middle initial, and another that contained things like credit limit, status, and risk rating? These entities would be correlated by the same ID but could be stored in separate tables in the database. That would do away with much of the cascading locking effect, drastically improving our throughput as load increases.
And you know what? That division would still respect the 3rd normal form.
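As a sketch of what that split might look like with JPA – the entity and table names here are mine, and on newer stacks the imports would be jakarta.persistence instead:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Two entities correlated by the same customer ID,
// each persisted to its own table.
@Entity
@Table(name = "customer_name")
class CustomerName {
    @Id long customerId;
    String firstName;
    String lastName;
    String middleInitial;
}

@Entity
@Table(name = "customer_credit")
class CustomerCredit {
    @Id long customerId; // same ID as CustomerName
    java.math.BigDecimal creditLimit;
    String status;
    String riskRating;
}
```

A transaction updating the name now locks only a customer_name row, leaving the customer_credit row free for the credit-limit rules to work against.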
Which of these entities do you think the business would classify under the “complicated and ever-changing rules” category?
And for those entities that are just about data persistence – do you think it’s justified to use 3 tiers? Do we really need a view model which we transform to data transfer objects which we transform to domain objects which we transform to relational tables and then all the way back? Wouldn’t some simpler 2-tier programming suffice – dare I say datasets? Ruby on Rails?
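For those persistence-only entities, the 2-tier version could be as simple as this – again, the schema and names are illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Read the row and hand the values straight to the view –
// no DTO or domain-object mapping layers in between.
class CustomerNameScreen {
    String displayName(Connection db, long customerId) throws Exception {
        try (PreparedStatement stmt = db.prepareStatement(
                "SELECT first_name, middle_initial, last_name " +
                "FROM customer_name WHERE id = ?")) {
            stmt.setLong(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                rs.next();
                return rs.getString("first_name") + " "
                     + rs.getString("middle_initial") + ". "
                     + rs.getString("last_name");
            }
        }
    }
}
```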
Are we ready to leave behind the assumption that all elements of a given layer must be built the same way?