
Originally Posted by Occam49
[SOAP BOX ON]
I'm new to A5, and not sure how the A5 native file system is implemented, but it seems to be based upon primitive system-level file record [i.e., byte-range] locking, which will never be able to provide efficient or comprehensive RI enforcement in a multi-user application. The fundamental flaw of file-system-oriented database systems is that all active user applications have write access to the data on disk. The OS knows nothing about the data semantics; it can only read, write, and lock data at the byte level on disk.
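To make that concrete, here is a minimal sketch of OS-level byte-range locking using POSIX fcntl from Python. The file name, record size, and layout are hypothetical, and this shows only the general pattern such file-based systems rely on, not A5's actual implementation:

[CODE]
import fcntl

RECORD_SIZE = 64   # hypothetical fixed-length record
RECORD_NO = 7      # record we want to update

# Every client opens the shared table file with write access --
# the OS sees only bytes, not rows, keys, or RI rules.
with open("customers.dbf", "r+b") as f:
    offset = RECORD_NO * RECORD_SIZE
    # Exclusively lock just this record's byte range; other
    # processes block if they try to lock the same range.
    fcntl.lockf(f, fcntl.LOCK_EX, RECORD_SIZE, offset)
    try:
        f.seek(offset)
        f.write(b"updated record".ljust(RECORD_SIZE))
        f.flush()
    finally:
        # Release the byte-range lock.
        fcntl.lockf(f, fcntl.LOCK_UN, RECORD_SIZE, offset)
[/CODE]

Note that nothing in that code stops another process from skipping the lock entirely and scribbling over the same bytes, which is exactly the weakness being described.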
A reliable multi-user database system requires a database engine, separate from the applications, that controls and manages access to all data, enforces the declarative RI [and other] rules, and ensures transaction integrity using an intelligent write-through global record cache and a transaction log. In this scenario, only the engine [normally] ever writes, updates, or deletes data. This is how ALL DBMS systems work.

The global LRU cache minimizes reads, since multiple user applications may find the data in the cache, avoiding an I/O. Frequently accessed data stays in the cache; unused cached data is overwritten by new requests for data. The transaction log writes only the actual changed data to disk, in a sequential file, which is very fast.

In a transaction-oriented dbms engine, a transaction [covering potentially many separate data updates to many records, indexes, etc.] either completes as a whole or fails as a whole, greatly reducing the chances of data corruption. The dirty pages in the global data cache are written to disk at checkpoint intervals, so one write can reflect many updates to a given page, improving I/O efficiency. The previous description is very much simplified, but it captures part of the essence of an engine-oriented dbms.
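To see the all-or-nothing transaction behavior in action, here is a minimal sketch using SQLite through Python (an embedded engine with a rollback journal, chosen only because it's freely available; the accounts table is made up for illustration):

[CODE]
import sqlite3

con = sqlite3.connect("demo.db")
con.execute("CREATE TABLE IF NOT EXISTS accounts "
            "(id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
con.execute("INSERT OR IGNORE INTO accounts VALUES (1, 100), (2, 0)")
con.commit()

try:
    # The 'with' block is one transaction: both updates commit
    # together, or neither does.
    with con:
        con.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
        # If this second statement fails (say, a constraint
        # violation), the engine rolls the first update back too.
        con.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 2")
except sqlite3.Error as e:
    print("transaction rolled back:", e)
finally:
    con.close()
[/CODE]

A file-oriented system that crashes between the two UPDATEs leaves the data half-changed; the engine's journal guarantees it never does.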
Using native A5 tables in a busy multi-user environment will ultimately result in data corruption [due to failed multi-table or index writes, or deletes] and poor concurrency performance [due to data locking].
Even in a lightly used system, the data integrity protection gained by using a database engine is well worth the additional complexity. Ever try rolling back even a day's worth of erroneous updates with a file-oriented database? Or how about recovering to the exact point in time of the last committed transaction after a disk crash, when your last backup was 24 or more hours old, without losing or re-entering a single committed transaction? I've done both.
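For anyone curious how log-based point-in-time recovery works in principle: restore the last backup, then replay the transaction log up to the target moment. A toy sketch follows, assuming a hypothetical append-only log of JSON {ts, key, value} records written in commit order; real engine log formats are far more involved:

[CODE]
import json
from datetime import datetime

def recover_to(log_path, target_time):
    """Rebuild state by replaying log entries committed at or
    before target_time (entries are assumed to be in time order)."""
    state = {}
    with open(log_path) as log:
        for line in log:
            entry = json.loads(line)  # {"ts": ..., "key": ..., "value": ...}
            if datetime.fromisoformat(entry["ts"]) > target_time:
                break                 # stop at the recovery target
            state[entry["key"]] = entry["value"]
    return state

# Write a toy log, then recover to a point between the two commits:
with open("txn.log", "w") as log:
    log.write(json.dumps({"ts": "2010-06-01T09:00:00", "key": "a", "value": 1}) + "\n")
    log.write(json.dumps({"ts": "2010-06-01T17:00:00", "key": "a", "value": 2}) + "\n")

print(recover_to("txn.log", datetime(2010, 6, 1, 12, 0)))  # {'a': 1}
[/CODE]

The same replay idea, applied in reverse with before-images, is what lets an engine undo a day's worth of erroneous updates.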
[SOAP BOX OFF]