Consider this article a theory on how Alpha Five version 8, with its Enterprise Passive Linked SQL capability, might be used to power a web community application where tens or even hundreds of thousands of users participate with little or no server or bandwidth constraint. Can Alpha Five do this? I think so. Tell me if I'm wrong.
You may have noticed that multi-user Alpha Five runtime licenses get more expensive the more users you add. Whether you buy a 10-user runtime or a 200-user runtime, the maximum number of users refers to the maximum number of concurrent users, NOT the maximum number of installed users. This restriction is, I believe, an extension of the system's record locking logic.
But what about Passive Link Tables and remote SQL updates? Since no record locking is involved, I believe there are no limits on how many users can access a remote database. Is this true?
For instance: MySQL database connectivity is included with almost every web hosting plan. And bandwidth use from your web host doesn't differentiate between HTML bandwidth and MySQL bandwidth, which means that with any ordinary web hosting plan it's possible to have many thousands of users sharing data without incurring massive hosting or bandwidth costs.
So here's my idea for a new kind of web application. Since it uses the A5 desktop runtime, I call it a desktop-web-application. It's really a desktop application that uses a remote database exclusively. When A5 refreshes a Passive Link Table, it has the ability to restrict or filter the data. That means if a time stamp field is added to every record, and that field is indexed, then the desktop-web-application can easily read only new data from the server when refreshing Passive Link Tables. That's a LOT faster than refreshing the whole table each time! As long as the time stamp is updated each time a user modifies a record, only new data need be read to keep a full and perfect mirror of the remote database on the local computer.
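The time-stamped incremental refresh can be sketched outside Alpha Five. The following Python/SQLite snippet is only an illustrative analogue of the filtered Passive Link refresh against MySQL; the table and column names (`posts`, `modified_at`) are hypothetical, not anything A5 generates.

```python
import sqlite3

# Stand-in for the remote MySQL database; the schema is hypothetical.
remote = sqlite3.connect(":memory:")
remote.execute(
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT, modified_at INTEGER)")
# The time stamp field must be indexed so the filtered refresh stays cheap.
remote.execute("CREATE INDEX idx_posts_modified ON posts (modified_at)")
remote.executemany("INSERT INTO posts VALUES (?, ?, ?)",
                   [(1, "first post", 100), (2, "second post", 200)])

def refresh(last_sync):
    # Read only rows changed since the last refresh -- the essence of
    # keeping a local mirror without re-reading the whole table.
    rows = remote.execute(
        "SELECT id, body, modified_at FROM posts "
        "WHERE modified_at > ? ORDER BY modified_at",
        (last_sync,)).fetchall()
    new_last_sync = rows[-1][2] if rows else last_sync
    return rows, new_last_sync

rows, last_sync = refresh(0)          # first refresh pulls everything
print(len(rows))                      # 2
remote.execute("INSERT INTO posts VALUES (3, 'third post', 300)")
rows, last_sync = refresh(last_sync)  # later refresh pulls only the new row
print(len(rows))                      # 1
```

Each refresh remembers the newest time stamp it has seen, so the next query's WHERE clause skips everything already mirrored locally.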
Alpha Five makes this easy. Searching and reading data are all done with the Passive Link Tables. The only hitch is that inserting new records, or updating existing records, cannot be done directly to the remote database. At least not in the traditional networked database fashion, with record locking. In this way, the logic of a desktop-web-application is much more like a regular web application than a traditional networked database application.
When inserts and updates are necessary, you have A5 put up a form. Then when the user is ready to write (just like in a regular web application), they press the Submit button, and a little bit of Xbasic + SQL updates the remote database in an instant. In fact, if you look at the A5 Data Explorer, a right-click on any table gives you a selection, "SQL Syntax - Insert Statement", where the exact SQL needed is already stubbed out for you. Mapping the insert or update form to the Xbasic SQL statement is easy because both the SQL statement and the A5 form were created from the same table definition. The SQL insert statement is generated directly from the remote database. The A5 insert or update form is generated directly from the passive link table, which is identical to the remote database.
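As a rough analogue of what the Submit button runs, here is a parameterized insert in Python/SQLite rather than Xbasic; the `members` table and `submit_member` helper are hypothetical names for illustration. The key detail is that the write also stamps `modified_at`, so other clients' filtered refreshes will pick the new row up.

```python
import sqlite3
import time

# Stand-in for the remote MySQL database; schema is hypothetical.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE members (id INTEGER PRIMARY KEY, name TEXT, "
    "email TEXT, modified_at INTEGER)")

def submit_member(name, email):
    # What the Submit button would execute: a parameterized INSERT,
    # stamping modified_at so the record shows up in incremental refreshes.
    db.execute(
        "INSERT INTO members (name, email, modified_at) VALUES (?, ?, ?)",
        (name, email, int(time.time())))
    db.commit()

submit_member("Ada", "ada@example.com")
count = db.execute("SELECT COUNT(*) FROM members").fetchone()[0]
print(count)  # 1
```

Because the write is a single stateless statement, no record lock is ever held between reading the form and committing the insert, which is exactly the web-application-style logic described above.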
Maybe I've got something wrong here. But it seems that in this scenario, these A5 desktop-web-applications are the ideal platform for quickly developing web applications that can potentially handle an unlimited number of users, without high costs or slowing down under heavy usage, even in today's real-world environment.
I must be wrong about something. There has to be a reason why this is not possible. It seems to me that the Alpha Five Enterprise version now makes it easy to create large-scale web applications. Such a desktop-web-application, working off any remote (MySQL) server, with time-stamped passive linked records, can potentially do everything a high-end Java or PHP web application can do, BUT... faster and more efficiently. I say faster because there are no server-side services (other than DB access) to slow the CPU. Meaning, the data does not need to be wrapped in HTML, AND the DB table data is never read twice, as is always the case when web application users revisit pages a second or third time.
So that's my theory. Think about it. Let me know what you think.