My client's users are roughly evenly divided between local and remote workstations. The remote users have broadband connectivity to the Internet. Concurrent user count could theoretically reach 200; realistically, we're likely looking at a typical concurrent count of 30-50. Table count is roughly 30. The most-active tables will likely grow at a rate of 3,500 retained rows per month ("retained" because the majority of rows become obsolete after 30 days and will be exported to archive and purged from the DB tables).
How would you handle the connection? Alpha's native tables? Or a SQL backend via Active Link tables?
If using Alpha's native tables, I figure my connection options include VPN, a mapped FTP drive, or... what else? There are trade-offs, I presume. I wouldn't have to worry about field/rule compatibility with a SQL backend, and I could use the less-costly Runtime license, avoiding the added cost of the more expensive RunEngine licenses required for Active Link access. But what about performance? If hosting on a Windows Server platform, how many VPN connections (and at what throughput) would cause trouble? It's my understanding that the encryption/decryption overhead of a VPN can be rather CPU-costly. Would a less-secure FTP mapping be reasonable? I haven't used FTP mapping in database applications, where a file pointer jumps back and forth within a given file, and I haven't investigated those technical aspects of FTP.
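For what it's worth, here's a minimal XBasic sketch of what I understand a shared native-table open to involve (the server name and table path are placeholders I made up):

' Every remote client opens the .dbf file directly, so it needs
' file-level (SMB-style) access to the shared folder -- that is the
' chatty, seek-heavy traffic the VPN would be carrying.
' The UNC path below is hypothetical.
dim t as P
t = table.open("\\FILESERVER\AppData\orders.dbf")
? t.records_get()   ' record count, read from the table file's header
t.close()

As far as I can tell, that kind of shared random access (byte-range reads, index seeks, record locks) is exactly what a plain FTP mapping can't provide, since FTP deals in whole-file transfers - which may answer my own question there.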
On the other hand, Active Links seems to offer a more trouble-free connection. But what about performance? I've reviewed forum posts - specifically a couple by Selwyn in which he notes that Active Links provides the benefit of almost universal access, but at some performance cost. From what I can see, that cost has never been clearly quantified. Is there a "magic line" (i.e., 1 million records or so) beyond which we'd expect to see noticeable lag? Is the number much higher? Or lower? My 3,500 retained rows per month works out to roughly 42,000 per year, so my gut feeling is that ~50,000 rows per year will keep me safe from client complaints about performance for many years. Just looking for agreement. And then there's presumably the issue of table/field compatibility - but that's a one-time burden. Plus the added cost of RunEngine licensing.
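By contrast, here's a rough AlphaDAO-style sketch of the SQL path, assuming a SQL Server backend (the connection-string values, table, and column names are all placeholders). My understanding is that Active Link tables ride on this same layer, so only the query and its result set cross the wire rather than the raw table file:

' Hypothetical server/database/table names throughout.
dim cn as SQL::Connection
dim args as SQL::Arguments
args.add("cutoff", date())
if cn.open("{A5API='SQLServer',Server='dbhost',Database='MyAppDB'}") then
    ' Only this query and its rows travel over the network.
    if cn.Execute("SELECT order_id FROM orders WHERE order_date >= :cutoff", args) then
        dim rs as SQL::ResultSet
        rs = cn.ResultSet
        while rs.nextRow()
            ? rs.data("order_id")
        end while
    end if
    cn.close()
end if

That difference - result sets over a single port versus raw file I/O - is presumably where both the "almost universal access" and the per-query overhead come from.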
Any thoughts?