In the past weeks, @mikependon and I have been working to stabilize #RepoDb, a lightweight and fast repository-based ORM for .NET. Mike went so deep that we now use direct IL emit code to optimize our data readers. But if we are fast, how fast are we compared to other micro- and full ORMs? This performance report is our first attempt to find out.
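For readers unfamiliar with the technique, here is a minimal, hypothetical sketch of what an IL-emit-based reader mapper looks like. This is an illustration of the general approach only, not RepoDb's actual implementation; the `ReaderMapper` class, and its assumptions that column ordinals match property order and that values are non-null, are mine:

```csharp
using System;
using System.Data;
using System.Reflection.Emit;

public static class ReaderMapper
{
    // Compiles a delegate that copies the current IDataRecord row into a
    // new T, so per-row mapping costs no reflection at all.
    public static Func<IDataRecord, T> Build<T>() where T : new()
    {
        var type = typeof(T);
        var method = new DynamicMethod("MapTo" + type.Name, type,
            new[] { typeof(IDataRecord) }, type, true);
        var il = method.GetILGenerator();
        var instance = il.DeclareLocal(type);

        il.Emit(OpCodes.Newobj, type.GetConstructor(Type.EmptyTypes));
        il.Emit(OpCodes.Stloc, instance);

        var getValue = typeof(IDataRecord).GetMethod("GetValue");
        int ordinal = 0;
        foreach (var prop in type.GetProperties())
        {
            if (!prop.CanWrite) continue;
            il.Emit(OpCodes.Ldloc, instance);      // the object being filled
            il.Emit(OpCodes.Ldarg_0);              // the IDataRecord
            il.Emit(OpCodes.Ldc_I4, ordinal++);    // assumed column ordinal
            il.Emit(OpCodes.Callvirt, getValue);   // reader.GetValue(ordinal)
            il.Emit(OpCodes.Unbox_Any, prop.PropertyType);
            il.Emit(OpCodes.Callvirt, prop.GetSetMethod());
        }
        il.Emit(OpCodes.Ldloc, instance);
        il.Emit(OpCodes.Ret);
        return (Func<IDataRecord, T>)method.CreateDelegate(typeof(Func<IDataRecord, T>));
    }
}
```

The delegate is built once per type and then called per row, e.g. `var map = ReaderMapper.Build<SalesOrder>(); while (reader.Read()) rows.Add(map(reader));` — which is where the speed-up over reflection-based mapping comes from.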
TL;DR: You may download the full test results here.
For this task, we have chosen to run RepoDb with the following benchmark frameworks:
Frans Bouma’s RawDataAccessBencher
StackExchange/Dapper performance tests (planned): https://github.com/StackExchange/Dapper/tree/master/Dapper.Tests.Performance
NOTE: The intention of this report is to show the fetch performance of RepoDb against other full and micro ORMs. The tests were designed to be simple and to reflect what most developers would do. @fbouma has explained the intention of the tests well here. He went on to answer all criticisms here, here, and here.
- Release build / True
- Client OS / Microsoft Windows NT 6.2.9200.0 (64bit)
- Bencher runs as 64bit / True
- CLR version / 4.0.30319.42000
- Number of CPUs / 8
- SQL Server version used / 13.00.1601
The Results with RawDataAccessBencher
We used AdventureWorks to fetch 31k rows from the SalesOrder table. The table covers most of the common data types, which makes it a good test of the overhead of type mappers.
This result is quite surprising, and I didn't expect us to be this far down. I suspect we introduced some overhead when we added special handling in the IL-based reader to support the spatial, BLOB, and DateTimeOffset data types.
I can jump off the cliff now :/. We've got some homework to do. Mike suggested it could be because we don't keep a persistent connection, and because we don't support "raw" SQL-to-object mapping, there might be overhead in our object-query parser.
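To make the "persistent connection" point concrete, here is a hypothetical sketch of the two patterns a fetch benchmark can measure. The class, query, and table name are made up for illustration; this is not code from RepoDb or from the benchmarks:

```csharp
using System.Data.SqlClient;

public class OrderFetcher
{
    private readonly SqlConnection _connection; // opened once, re-used every iteration

    public OrderFetcher(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
    }

    // Persistent connection: no pool checkout or connection reset per call.
    public int CountOrders()
    {
        using (var command = new SqlCommand("SELECT COUNT(*) FROM SalesOrder", _connection))
        {
            return (int)command.ExecuteScalar();
        }
    }

    // Connection per call: even with ADO.NET pooling, the checkout and the
    // state reset add a small, fixed overhead to every iteration - which
    // matters most in single-item fetch benchmarks.
    public static int CountOrdersPerCall(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM SalesOrder", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}
```

The per-call overhead is roughly constant, so it is amortized over a 31k-row set fetch but dominates a single-row fetch, which is consistent with the pattern described below.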
The Results with OrmBenchmark
The graph shows that we are quite fast at fetching sets but slowest at fetching a single item, even when we re-use connections per iteration (Persistent Query). This is consistent with our findings from RawDataAccessBencher.
A good ORM is one that gets the job done with as little CPU and memory footprint as possible. While I understand there is always a trade-off between robustness and performance, good fetch performance should carry greater weight in the selection criteria. After all, it's what application users feel.
Clearly, we have yet to win this game. We need to look deeper and maybe rethink the way we do things. It will be an exciting journey.
- Remove some environmental influence: run the tests in a Docker container or on a fresh Azure VM.
- Cover fetch performance for graph sets.
AdventureWorks 2008 R2 BAK File
An academic approach to ORM benchmarking
Old but gold ORM battles:
- https://ayende.com/blog/4122/benchmarks-are-useless-yes-again
- https://weblogs.asp.net/fbouma/fetch-performance-of-various-net-orm-data-access-frameworks
- https://weblogs.asp.net/fbouma/fetch-performance-of-various-net-orm-data-access-frameworks-part-2
- https://weblogs.asp.net/fbouma/re-create-benchmarks-and-results-that-have-value
- https://weblogs.asp.net/fbouma/net-micro-orm-fetch-benchmark-results-and-the-fine-details