
View Full Version : SQL vs DBF



Alan Lee
03-09-2008, 11:09 AM
What are the advantages or disadvantages of either as the back end?

Thanks!

Selwyn Rabins
03-09-2008, 01:10 PM

Companies like Microsoft and Oracle have invested millions (possibly billions) to ensure the scalability, reliability and performance of their SQL databases.

If you use SQL databases in A5, you get to piggyback on that investment.

While there are certainly a lot of very impressive applications that have been developed on top of .dbf files, .dbf files are not recommended if you want hugely scalable applications.

The great thing about v9 is that you can switch from .dbf files to SQL tables incrementally. You can convert individual tables to active-link tables as you see fit. The Export operation now allows you to export a .dbf to a SQL table, so converting an existing table to a SQL table is easy:

a) export the data to SQL
b) create an active link to the SQL table.
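Once the data is over on the SQL side, a quick sanity check from Xbasic via v9's AlphaDAO layer might look like the sketch below. This is only a sketch: the named connection "MyConnection" and the "products" table are invented for illustration, not taken from this thread.

```xbasic
' Sketch only - assumes a named AlphaDAO connection called "MyConnection"
' and that a "products" table has been exported to the SQL database.
dim conn as SQL::Connection
if conn.open("::Name::MyConnection") then
    if conn.Execute("SELECT COUNT(*) FROM products") then
        ? conn.ResultSet.data(1)   ' row count in the SQL copy
    end if
    conn.close()
else
    ui_msg_box("Error", conn.CallResult.text)
end if
```

Comparing that count against the record count of the original .dbf is a cheap way to confirm the export before pointing an active-link table at it.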

DaveM
03-09-2008, 09:37 PM
Mike,

One advantage might be the size limitations. I believe I read at MySQL that tables are limited to 4 gig with the free server and unlimited if you purchase their enterprise edition, as opposed to the 2 gig limit on .dbf files.

SQL is the choice if you are doing web work without Alpha (and I believe with Alpha too), though I have loved my .dbf tables for about 25 years now.

ITALIA
04-29-2008, 09:11 PM
Guys,

I'm really interested in the new SQL-based database support in Alpha. And so is my boss, who is seriously thinking about purchasing the new version 9 Platinum for his business.

There's just one thing we are concerned about: how easy or hard is it to convert a .dbf database (we are currently using Alpha5 version 6) to a SQL database?
Our inventory/chargeable system is pretty big and complicated. It involves a lot of database tables and calculations that make it very slow and also not very reliable (I have to "refresh" the database indexes almost every day as they automatically go out of sync - that's one of the reasons we want to get rid of .dbf tables, which are totally unreliable for big database systems).

We hope there's an easy way to convert our existing .dbf database to a SQL database without losing any field rules or set links. We also hope that the annoying "Refresh database indexes" step won't be necessary any more, as with "normal" databases.

I'm waiting for an exhaustive reply, and then I'll probably start downloading the v9 30-day trial and perform some testing.


Thanks in advance.





MAURO TRENTINI

DaveM
04-29-2008, 09:29 PM
How big (in your own words) is your database? This may be the real key to what you need.

There are users with HUGE databases on here without index problems. Indexes should not have to be rebuilt unless some operation is making them go out of date. Usually this stems from using indexes as filters and adding tons of them. Indexes are very helpful if used right.

ITALIA
04-29-2008, 10:06 PM
Thanks for your reply Dave.

Find below some information about my database:

53 database tables
13 sets
57 forms
80 reports
the biggest table (Products) contains about 9,000 records; the Orders table (currently ~6,000 records) could reach about 10,000 records by the end of the year (at the end of each financial year, we delete all the data and start from an empty database). All other tables are below 2,000 records (some of them are really small, 20-40 records).


I'm not sure if this is the kind of information you were looking for. If not, let me know and I'll provide all the details you need.



There are users with HUGE databases in here without the index problems. Indexes should not have to be rebuilt unless some operation is making them go out.

Regarding the table indexes, maybe I didn't build them properly the first time and now I'm having issues, but they seem OK to me.
For example, on the Products table I created an index on the product codes, which are unique; on the Orders table I set up an index on the order number, which is auto-incrementing, and every day I have to rebuild that index as it goes totally out of sync.

Could you explain what you mean by the statement below? (I'm not sure what you mean by "adding tons" and "using indexes as filters".)

Usually this stems from using indexes as filters and adding tons of them. Indexes are very helpful if used right


Thanks a lot for your help.




MAURO

Steve Workings
04-29-2008, 10:45 PM
Mauro - I wouldn't classify your system as big - maybe in the medium basket.

If you're having the problems you describe, switching to SQL isn't your solution. Fixing your design and your procedures would be a better use of your time and money.

I'm not telling you not to switch to SQL, but from what you describe I don't see much reason.

But, at the very least, do upgrade to Version 9. That's a sure-win for you for the speed, features and maturity.

DaveM
04-30-2008, 04:08 PM
My db has 40 tables, 12 sets, 44 forms, 28 stand-alone browses, 98 reports, and 118 scripts and functions, and the largest table has 42,000 records. In one location it has about 18 users at any given time, off a server that is not super fast, with a variety of workstations using shadowed tables. There are no complaints about speed.

Lightning Query Optimization may do you some real good. Look that up on here; I know it has been brought up.

Below is my worst index (used as a filter).
I am working on changing it as soon as I can; I'm mostly using queries now instead of indexes, though some indexes are necessary.

I have never had to rebuild them in 5 years, and they work with v7, v8 and v9.

Sometimes it is something in code somewhere else that messes your indexes up. It would be interesting if you could follow along in the app and figure out what operation, script, etc. is messing the indexes up.

Raymond Lyons
04-30-2008, 07:52 PM
Mauro,

I'll second what Steve Workings said. Your DB is not large at all - I have had complicated DBs with a table with hundreds of fields and around 1 million records. While I would not do this with .dbf tables today (yes, there were problems, not the least of which was that compacts and appends with duplicate checking took forever), there were relatively few index-related problems. As Steve said, you should re-examine how indexes are being used in the DB.

Ray

ITALIA
04-30-2008, 08:40 PM
Guys,

I appreciate your replies. I'm trying to get an idea of what my problem could be.
It's good to read that bigger databases work just fine with Alpha5 without indexing problems; obviously we (my colleagues and I) didn't build the database properly (just a reminder that we're using version 6 - in case there are known issues, since Dave mentioned v7, v8 and v9 only).

It just sounds weird, because we usually build applications based on SQL databases with J2EE, .NET or PHP. We started to "learn" Alpha5 because one of our customers requested it.
I thought our database was big because of Alpha5 v6 limits; we have built applications MUCH bigger than this with "normal" databases like Microsoft SQL Server, and in 9 years of development we NEVER had an issue with database indexes. Not one script, not one line of code, nothing that ever corrupted an index.

Maybe there's something we don't know about .dbf indexes. I understand that there might be something in the code somewhere that messes my indexes up, but how does this happen? Are indexes corrupted by the code? This can't happen with a SQL database, I'm sure of it. That's why at the beginning I asked whether moving to a SQL database would fix the issue.

Also, where do I start to "debug" my program to understand what's wrong with the indexes? This is turning into a nightmare.


Thanks to everyone anyway; it's helpful to hear other opinions.



MAURO

DaveM
04-30-2008, 11:13 PM
known issues as Dave mentioned v7, v8 and v9 only).



Sorry, I said that wrong. I was just mentioning those versions because v7 and v8 are the prominent ones today. I have had basically the same tables since DOS, starting with V1. I redid the indexes when going to A5 from DOS. They are pretty much the same, and I had no problems with v6 at all.

In v7, I split the names table into 2 tables due to my own inadequacies in maintaining the fields in one table. Each table has about 3,000 records with over 100 fields each, all fully utilized. They are about 5.5 megs in size and growing on the local machine; on a client's machine they are about 15 megs each, if that helps.

DaveM
04-30-2008, 11:17 PM
Mauro,

Do you have a query being created somewhere that is named the same as an index? I seem to remember reading about something like that in a post a long time ago.

Is it possible you have more than one operation happening on a table at the same time? Maybe one is not finished before the other starts? Maybe more than two?

Something is causing the index problem. Adding a record alone should not do this. What table set? What else happens at the same time? Or what COULD be happening at the same time?

During the record add, does a script or UDF have to run?

Dbf's have been associated with mdx, cdx, fpx and several other index file types. I think what Alpha uses is an exact or close variation of the FoxPro index system. I first heard of LQO in connection with FoxPro and Clipper. I'm 61, so memory is thinner now, so I must say: I think.

Is it possible to upload a copy here (with some sample data) so we can work on it? And the exact steps, as best you can, of when the problem occurs?

CALocklin
05-01-2008, 03:11 AM
Dissenting opinion....

Sometimes there is nothing "wrong" with the code but indexes still get corrupted.

There may be something in the code that is "causing" the problem, but it isn't anything that is "basically wrong". I won't argue that changing something in my code might fix the problem, but WHAT to change is the problem. In other words, it might be possible to change some code somewhere to work around the problem, BUT it won't be obvious just from looking at the code. That's the big problem: since it isn't obvious, how do you find the source of the problem? (I only have 556 global scripts and functions in the app that has the indexes and problems listed below - so checking the source code should be easy!)

Indexes are also more likely to get corrupted on a multi-user networked application. I have a couple apps that NEVER have index problems for single users but have progressively more index problems as more users start working from the network.

My indexes seldom use filters (most filtered indexes were created automatically by Alpha) and never use variables. Most of my indexes use only 1 field and almost all use no more than 2 fields.

Following are all the index definitions for my most complicated app. It runs fine for single users, runs fine for some 2-user networks, and gets corrupted almost daily on a couple of other 2-user networks. The one 4-user network only seems to get corrupted when one particular user is working. (Hmmm, some of this sounds kind of like system issues, doesn't it? But we've changed all the network cards, and now all the computers on one of those 2-user networks, and we still have problems. Go figure! One person suggested that a system with both some real old, slow computers and some real fast computers could cause problems. Maybe?)

One thing I can say is that there is a lot of data entry and/or editing going on when they have problems. I don't think indexes ever get messed up just from reading data.

Note that the index issues only seem to happen on the Agents, Ord_head, and Ord_Item tables and these tables don't have any filters. Of course, these are also the ones most likely to be edited. The tables that do have filters in their indexes never seem to be a problem.

For this particular app, the index problems have been so bad that I will happily send $100 to anyone who can provide an actual, workable solution. (Sorry, deleting all the indexes doesn't count as a solution.)

TABLES
------------------------
Structure: Index Name -- Sort Order -- Filter -- Options
AGENTS -- e:\a5v7\trak-it_sp\app\
Indexes:
Agent_Nam_ -- AGENT_NAMF -- --
Agent_Nof -- AGENT_NOF -- -- U
Agent_No_ -- AGENT_NOF -- --
Agnt_Phon_ -- AGENT_NAMF+BILL_PHONF -- --
Bill_Phon_ -- BILL_PHONF -- --
Client_No_ -- CLIENT_NOF -- --
P_S_Phon_ -- P_S_PHONF -- --
BROKERS -- e:\a5v7\trak-it_sp\app\
Indexes:
Broker_Nof -- BROKER_NOF -- -- U
Broker_No_ -- BROKER_NOF -- --
CITIES -- e:\a5v7\trak-it_sp\app\
Indexes:
City_Name_ -- CITY_NAMEF -- --
CLIENT_MEMO -- e:\a5v7\trak-it_sp\app\
Indexes:
Client_No_ -- CLIENT_NOF -- --
CLIENTS -- e:\a5v7\trak-it_sp\app\
Indexes:
Client_Dcf -- CLIENT_NOF -- DISCONTF=.F. --
Client_Nof -- CLIENT_NOF -- -- U
Client_No_ -- CLIENT_NOF -- --
Phone_ -- PHONEF -- --
Realtorf -- REALTORF -- DISCONTF=.F. --
Realtor_ -- REALTORF -- --
Sign_Phon_ -- DEF_SN_PHF -- --
Stld_Ffff6b25 -- BROKER_NOF -- --
INSTALLERS -- e:\a5v7\trak-it_sp\app\
Indexes:
Inst_Name_ -- NAMEF -- --
Namef -- NAMEF -- ACTIVEF --
INV_HEAD -- e:\a5v7\trak-it_sp\app\
Indexes:
Bill_Phon_ -- BILL_PHONF -- --
Client_No_ -- CLIENT_NOF -- --
Inv_Cldt_ -- CLIENT_NOF+INVERT(CDATE(INV_DATEF)+INV_NUMBF) -- --
Inv_Date_ -- INV_DATEF -- --
Inv_Dtph_ -- CDATE(INV_DATEF)+BILL_PHONF -- --
Inv_Numbf -- INV_NUMBF -- -- U
Inv_Numb_ -- INV_NUMBF -- --
Inv_Phdt_ -- BILL_PHONF+INVERT(CDATE(INV_DATEF)) -- --
Nvrt_Clnu_ -- CLIENT_NOF+INVERT(INV_NUMBF) -- --
INV_NEW_LIST -- e:\a5v7\trak-it_sp\app\
Indexes:
Bill_Phone -- BILL_PHONF -- --
Client_Nof -- CLIENT_NOF -- --
INV_PAY -- e:\a5v7\trak-it_sp\app\
Indexes:
Client_No_ -- CLIENT_NOF -- --
Clnt_Date_ -- CLIENT_NOF+CDATE(DATE_PAIDF) -- --
Inv_Numbf -- INV_NUMBF -- -- U
Inv_Numb_ -- INV_NUMBF -- --
Paymt_Nof -- PAYMT_NOF -- -- U
INVOICE_DETAIL -- e:\a5v7\trak-it_sp\app\
Indexes:
Client_Nof -- CLIENT_NOF+WORK_ORDF -- INV_NUMBF="" --
Client_Nof1 -- CLIENT_NOF+WORK_ORDF -- LEFT(CHARGESF,6)="Refund" .AND. INV_NUMBF="" --
Client_No_ -- CLIENT_NOF -- --
Clnt_Rev_ -- CLIENT_NOF + INVERT(CDATE(COMP_DATEF)) -- --
Clt_Noinv_ -- CLIENT_NOF -- INV_NUMBF="" --
Comp_Date_ -- COMP_DATEF -- --
Inv_Numb_ -- INV_NUMBF -- --
Master_No_ -- MASTER_NOF -- --
Stld_Ffff2 -- INV_NUMBF+WORK_ORDF -- --
Work_Ord_ -- WORK_ORDF -- --
ORD_HEAD -- e:\a5v7\trak-it_sp\app\
Indexes:
Agnt_Name_ -- AGNT_NAMEF -- --
Bill_Ph_ -- PHON_BILLF -- --
Client_No_ -- CLIENT_NOF -- --
Master_No_ -- MASTER_NOF -- --
Prop_Addr_ -- PROP_ADDRF -- --
Rltr_Name_ -- REALTORF -- --
Sign_Ph_ -- PHON_SIGNF -- --
ORD_HEAD_MEMO -- e:\a5v7\trak-it_sp\app\
Indexes:
Master_No_ -- MASTER_NOF -- --
ORD_HEAD_UP_ORDS -- e:\a5v7\trak-it_sp\app\
Indexes:
Agnt_Namef -- AGNT_NAMEF -- --
Client_Nof -- CLIENT_NOF -- --
Master_Nof -- MASTER_NOF -- --
Phon_Billf -- PHON_BILLF -- --
Phon_Signf -- PHON_SIGNF -- --
Prop_Addrf -- PROP_ADDRF -- --
Realtorf -- REALTORF -- --
Up_Qtyf -- UP_QTYF -- --
ORD_ITEM -- e:\a5v7\trak-it_sp\app\
Indexes:
Comp_On_ -- COMP_ONF -- --
Master_No_ -- MASTER_NOF -- --
Mast_Date_ -- MASTER_NOF+CDATE(WO_DATEF) -- --
Prt_Dtrev_ -- INVERT(CDATE(PRINT_DTF)) -- -- U
Req_Date_ -- RQST_DATEF -- --
Work_Ord_ -- WORK_ORDF -- --
Wo_Date_ -- WO_DATEF -- --
ORD_ITEM_MEMO -- e:\a5v7\trak-it_sp\app\
Indexes:
Work_Ordf -- WORK_ORDF -- --
OTHER_CHARGES -- e:\a5v7\trak-it_sp\app\
Indexes:
Work_Ord_ -- WORK_ORDF -- --
PAST_DUE_REPORTS -- e:\a5v7\trak-it_sp\app\
Indexes:
Client_No_ -- CLIENT_NOF -- --
Phone_ -- PHONEF -- --
Realtor_ -- REALTORF -- --
POST_ARM_LIST -- e:\a5v7\trak-it_sp\app\
Indexes:
Descript_ -- DESCRIPTF -- --
POST_COLOR_LIST -- e:\a5v7\trak-it_sp\app\
Indexes:
Descript_ -- DESCRIPTF -- --
POST_RENT_WARNINGS -- e:\a5v7\trak-it_sp\app\
Indexes:
Client_No_ -- CLIENT_NOF -- --
POST_TYPE_LIST -- e:\a5v7\trak-it_sp\app\
Indexes:
Descript_ -- DESCRIPTF -- --
PRICES_ORD_TYPE -- e:\a5v7\trak-it_sp\app\
Indexes:
Price_Codef -- PRIC_CODEF -- -- U
Pric_Desc_ -- PRIC_DESCF -- --
PRICES_OTHER -- e:\a5v7\trak-it_sp\app\
Indexes:
Price_Codef -- PRIC_CODEF -- -- U
Pric_Desc_ -- PRIC_DESCF -- --
REPLACE_LIST -- e:\a5v7\trak-it_sp\app\
Indexes:
Descript_ -- DESCRIPTF -- --
RIDERS_AGENT -- e:\a5v7\trak-it_sp\app\
Indexes:
Riders_ -- RIDERSF -- --
RIDERS_STCK -- e:\a5v7\trak-it_sp\app\
Indexes:
Riders_ -- RIDERSF -- --
SIGNTYPE -- e:\a5v7\trak-it_sp\app\
Indexes:
Sign_Type_ -- SIGN_TYPEF -- --
SUB_CONTRACTORS -- e:\a5v7\trak-it_sp\app\
Indexes:
Contractr_ -- SUB_NAMEF -- --
Sub_Numbf -- SUB_NUMBF -- -- U
TRAK_IT_CONFIRMATION_OUTBOX -- e:\a5v7\trak-it_sp\app\
Indexes:
Date_ -- RECEIVE_DATE -- --
TRAK_IT_GENERAL_OUTBOX -- e:\a5v7\trak-it_sp\app\
Indexes:
Date_ -- RECEIVE_DATE -- --
Folder -- FOLDER -- --
Msg_Id -- MSG_ID -- --
UNIT_TYPE -- e:\a5v7\trak-it_sp\app\
Indexes:
Unit_Typef -- UNIT_TYPEF -- --
ZIPCODE -- e:\a5v7\trak-it_sp\app\
Indexes:
Zip_ -- ZIPF -- --
Note that index names ending with an underscore are indexes that I created for a specific need. The indexes ending with "f" were created automatically by Alpha.

For anyone who purchased my AIMS Grab Bag (http://www.aimsdc.net/Programs.htm#Funcs) addon, this list was created with the App Basics routine by selecting only Tables / Indexes to be listed.

ITALIA
05-01-2008, 04:23 AM
After reading CALocklin's post I feel a bit better; I'm not the only one having this issue.


Indexes are also more likely to get corrupted on a multi-user networked application. I have a couple apps that NEVER have index problems for single users but have progressively more index problems as more users start working from the network.

This is a valid point, I totally agree.
When my 2 colleagues and I tested the program we developed, we never had indexing issues, but the same program running in a multi-user networked environment (the program is on a shared drive that users access from their computers) needs its database indexes refreshed almost every day.
In my case there could be a maximum of 16 people using the program at the same time, but I believe the average is about 8 simultaneous users (2-3 in the warehouse, 3-4 in customer service, 1-2 in finance, plus myself), entering data (not just reading).

Also, I just wanted to add a personal thought:
in Alpha5, if you right-click on a database table you can select an operation called "Update indexes". You won't find this or a similar operation in "normal" SQL databases, probably because there's no need for it.
Therefore, if they created and provided such an operation, it means that table indexes in Alpha5 can become corrupted somehow (and they actually do).

My dilemma is:
is this issue DBF-related, and would moving to SQL fix it?

My boss has already approved a proposal to re-build the entire program with technologies like .NET or J2EE if the answer to the previous question is negative. I really hope this won't be the case.



MAURO TRENTINI

Richard Rabins
05-01-2008, 05:16 AM

Dear Mauro

Switching to a SQL backend rather than DBF will eliminate this issue and give you enormous scalability.

I don't know if you saw Selwyn's comment on this topic:

Companies like Microsoft and Oracle have invested millions (possibly billions) to ensure the scalability, reliability and performance of their SQL databases.

If you use SQL databases in A5, you get to piggyback on that investment.

While there are certainly a lot of very impressive applications that have been developed on top of .dbf files, .dbf files are not recommended if you want hugely scalable applications.

The great thing about v9 is that you can switch from .dbf files to SQL tables incrementally. You can convert individual tables to active-link tables as you see fit. The Export operation now allows you to export a .dbf to a SQL table, so converting an existing table to a SQL table is easy:

a) export the data to SQL
b) create an active link to the SQL table.

aRob
05-01-2008, 10:52 AM
CALocklin,

If you get frequent corruption when one particular user is active, that should make it easier to isolate your problem. If you have replaced that user's hardware with a clean system and the problem still comes up, then some function or some specific sequence of functions is causing it. Spend a day watching everything he/she does. Better yet, have a user who never or rarely has a problem watch, because differences in procedure will be more obvious to an experienced person.

DaveM
05-01-2008, 11:10 AM
Edit: I looked at all my indexes, and I have about 65 indexes over 40 tables. Maybe that is why I never have many problems with indexes. Also, if anyone on here wants to look at the program, you are welcome to it, and the code will be open (the data will be changed, of course) if it is for learning purposes only. I bet you can find areas I could have done better too. I would appreciate any feedback. No programmer can know it all!

I take a back seat to Cal any day, but why have I not had index issues while he is having them daily, with fewer indexes than I have and fewer users? I only showed one table, not the 40 that I use.

I do agree with the Rabins brothers about SQL. I think it is the greatest thing they have done for us in a long time (and for themselves too). I would switch to SQL in a heartbeat, but the runtime pricing makes it prohibitive for my own business - I am talking about the per-seat pricing.

It just depends on what you are doing.

mmaisterrena
05-01-2008, 07:33 PM
I think the update() function frequently causes index corruption. But how do you "live" without using this function?

Steve Workings
05-01-2008, 10:16 PM
I hadn't seen people use the update() function until recently. In fact, it never occurred to me to do this until I saw it in use recently. The "traditional" method has been like this:


tbl.change_begin()
tbl.somefield = "somevalue"
tbl.change_end(.t.)
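A self-contained version of that traditional pattern might look like the sketch below; the table and field names are made up for illustration.

```xbasic
dim tbl as P
tbl = table.open("customer")   ' hypothetical table name
tbl.change_begin()             ' begin editing the current record
tbl.NOTES = "somevalue"        ' hypothetical field
tbl.change_end(.t.)            ' .t. commits the change; indexes are updated here
tbl.close()
```

The change_begin()/change_end() pair gives Alpha a clear transaction boundary for the record edit, which is the point of the "traditional" method.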

mmaisterrena
05-01-2008, 10:33 PM
Mmm, I think that one causes problems with indexes on .dbf files too...

DaveM
05-01-2008, 10:51 PM
I don't think I have ever used the update() function as such. I have (but seldom) done the



tbl.change_begin()
tbl.somefield = "somevalue"
tbl.change_end(.t.)


I do use operations to update fields. I guess it is the same thing.

mmaisterrena
05-02-2008, 03:04 AM
I use the update() function widely to update fields from a web application dialog, and yes, indexes get corrupted often.

CALocklin
05-02-2008, 03:51 AM
I use the update() function widely to update fields from a web application dialog, and yes, indexes get corrupted often.
IS THIS REALLY TRUE? Indexes are getting corrupted often in your WEB application?

If true, that scares the heck out of me. I wasn't having any problems at all because I was using NO indexes, only queries, in my web apps - just in case. However, some recent change (and my pages haven't changed in over a year, so it's not my pages) is causing my customers to complain about frequent "Error 500 - could not create temporary files when running query" messages. Similar errors occurred occasionally in the past, but now they are frequent. I tracked them down, and every one was indeed occurring at a <tbl>.query_create() function. The problems aren't just at one line or even just one page; it seems to be happening all over, at random.

I was hoping to add indexes as a workaround, since nobody wants to own up to causing the problem. (I think it might have something to do with the servers at the ISP, but I'm not sure.) However, if indexes can get messed up on the WAS, I'll have to tell customers they will have to tough it out - I prefer a minor inconvenience (click the 'Back' button and try again) over corrupted data because of index problems.

DaveM
05-02-2008, 10:26 AM
Doesn't a query get added to the index, and if so, shouldn't it be deleted after it is run? Can't you create a temporary query that never goes to the index?

Sounds like the update() function and how it is used needs to be investigated.

mmaisterrena
05-02-2008, 11:31 AM
If you don't have any indexes, then there's nothing to corrupt -
so you don't seem to have a problem.

CALocklin
05-02-2008, 12:32 PM
Doesn't a query get added to the index, and if so, shouldn't it be deleted after it is run? Can't you create a temporary query that never goes to the index?

Sounds like the update() function and how it is used needs to be investigated.
A query never gets "added" to the index. Some queries will use an index if one is available, but I didn't have any indexes to start with.

A "temporary" query in Alpha terms is only temporary in the sense that "Alpha Five will automatically delete this query if necessary if building a new query would cause it to run out of query slots. There are a maximum of 16 query slots."

The only other way that I know of to make a query temporary is to create it with a pointer to the query, then "drop" the query when done. Something like this:
ptr = <tbl>.query_create()
' ... do whatever needs to be done ...
ptr.drop()

From the Help files:
The <QUERY>.DROP() method deletes the query list file ($$*.MPX) referenced by the <QUERY> object pointer.

FWIW: I even created all my queries with the "MNT" options, which tell them NOT to use any indexes even if they are available, NOT to reuse an existing query even if the definition of an existing query matches exactly, and to make it a temporary query as defined above.

HMMMMM: As I was adding the info about the <QUERY>.DROP() method, it occurred to me that I might try that in my web pages. Maybe the "can't create temporary file" error is happening because of a timing issue that isn't allowing the "temporary" file to get dropped soon enough when another query is being created.

Finian Lennon
05-02-2008, 12:47 PM
I'd say that it is essential to use the idx = t.query_create() ... idx.drop() construct for all queries. If you don't, one of two things is going to happen: (1) after some time you will get a "maximum indexes attached" error, or (2) you will have problems with temp files. These problems are especially acute when running on servers, such as under Terminal Services or Citrix. There might be a parallel there with web-based apps.

I can't say that by always dropping queries all of our problems with queries and temp files have been completely eliminated, but I do know they have been drastically reduced.
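That create/drop discipline, spelled out as a sketch: the table name and filter below are invented, and the query.filter/query.order dot variables are the usual way to feed <tbl>.query_create().

```xbasic
dim tbl as P
tbl = table.open("ord_head")            ' hypothetical table name
query.filter = "CLIENT_NOF = '1234'"    ' hypothetical filter
query.order = ""
dim qry as P
qry = tbl.query_create("T")             ' "T" = temporary query option
' ... work with the query results here ...
qry.drop()                              ' explicitly deletes the $$*.MPX query file
tbl.close()
```

Dropping the query yourself, rather than waiting for Alpha to recycle one of the 16 query slots, is exactly what avoids the "maximum indexes attached" and temp-file problems described above.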

CALocklin
05-02-2008, 02:14 PM
I'd say that it is essential to use the idx = t.query_create() ... idx.drop() construct for all queries. If you don't, one of two things is going to happen: (1) after some time you will get a "maximum indexes attached" error, or (2) you will have problems with temp files. These problems are especially acute when running on servers, such as under Terminal Services or Citrix. There might be a parallel there with web-based apps.

I can't say that by always dropping queries all of our problems with queries and temp files have been completely eliminated, but I do know they have been drastically reduced.
I suspect that in reality you may be correct, but have you checked out the "T" option for queries?

T -- Temporary query. Alpha Five will automatically delete this query if necessary if building a new query would cause it to run out of query slots. There are a maximum of 16 query slots.

According to that Help text, if all the queries were created with the "T" option, they (or at least one of them) should get dropped automatically when query #17 is created. Since the users who have problems can simply push the Back button and it always works the second time around, I suspect there is a timing issue - hence the HMMMM in my previous post. Although I don't use this technique on the desktop very often, I've never had a problem when I did. However, I think I'll use the drop method from now on. That should eliminate the chances for most timing issues.

I'm going to change my web pages soon. I was going to start earlier, but discovered that I had forgotten to move them to my new computer. I'll post the results as soon as the job is done and we've had time to get feedback from users - probably won't get much feedback until next Tuesday or Wednesday.
I'm going to change my web pages soon. I was going to start earlier but discovered that I forgot to move them to my new computer. I'll post the results as soon as the job is done and we've had time to get feedback from users - probably won't get much feedback until next Tuesday or Wednesday.