OnRun()
// Progress dialog; #1/#2 show the table/field being processed, #3 the record counter
timProgress := TIME;
tblRcdCount := 0;
Window.OPEN(Text001,rTableList.TableName,rTableList.FieldName,tblRcdCount);

rTableList.RESET;
IF rTableList.FIND('-') THEN
  REPEAT
    tblRenameKey := FALSE;
    CLEAR(tblRcdCount);
    Window.UPDATE;

    CLEAR(rFieldList);
    rFieldList.SETRANGE(TableName,rTableList.TableName);
    rFieldList.SETRANGE(FieldName,rTableList.FieldName);
    IF rFieldList.FIND('-') THEN BEGIN
      tblTableRef.OPEN(rFieldList.TableNo,FALSE);

      // If the user ID field is part of the current key, the record must be
      // deleted and re-inserted under the new value instead of simply modified
      tblKeyFields := tblTableRef.CURRENTKEY;
      IF STRPOS(tblKeyFields,rTableList.FieldName) > 0 THEN
        tblRenameKey := TRUE;
      CLEAR(tblKeyFields);

      IF tblTableRef.FIND('-') THEN
        REPEAT
          tblRcdCount := tblRcdCount + 1;
          tblFieldRef := tblTableRef.FIELD(rFieldList."No.");
          OldID := tblFieldRef.VALUE;

          rIDList.RESET;
          rIDList.SETCURRENTKEY("OLD User ID");
          rIDList.SETRANGE("OLD User ID",OldID);
          IF rIDList.FIND('-') THEN
            REPEAT
              IF OldID <> rIDList."New User ID" THEN
                IF NOT tblRenameKey THEN BEGIN
                  // Field is not in the key: update it in place
                  tblTableRef2 := tblTableRef.DUPLICATE;
                  tblFieldRef2 := tblTableRef2.FIELD(rFieldList."No.");
                  tblFieldRef2.VALUE := rIDList."New User ID";
                  tblTableRef2.MODIFY(FALSE);
                END ELSE BEGIN
                  // Field is in the key: delete and re-insert with the new value
                  tblTableRef2 := tblTableRef.DUPLICATE;
                  tblFieldRef2 := tblTableRef2.FIELD(rFieldList."No.");
                  tblTableRef2.DELETE(FALSE);
                  tblFieldRef2.VALUE := rIDList."New User ID";
                  tblTableRef2.INSERT(FALSE);
                END;
              CLEAR(tblFieldRef2);
              CLEAR(tblTableRef2);

              // Refresh the dialog and commit roughly every 3 seconds
              IF timProgress < TIME - 3000 THEN BEGIN
                timProgress := TIME;
                Window.UPDATE;
                COMMIT;
              END;
            UNTIL rIDList.NEXT = 0;
          CLEAR(tblFieldRef);
          CLEAR(rIDList);
        UNTIL tblTableRef.NEXT = 0;
    END;
    CLEAR(tblTableRef);
  UNTIL rTableList.NEXT = 0;

Window.CLOSE;
Comments
Let's first try to loop all the records BUT WITHOUT making changes, so comment out the following code. If this works, the problem is somewhere in here.
In this case: try to put the code in a function and make the variables "tblTableRef2" and "tblFieldRef2" local to the function.
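A minimal sketch of that refactor (the function name ReplaceIDInRecord and its parameters are illustrative, not from the original post; the point is that the RecordRef and FieldRef copies are declared as locals, so they go out of scope on every call):

PROCEDURE ReplaceIDInRecord(VAR tblTableRef : RecordRef;FieldNo : Integer;NewID : Code[20]);
VAR
  tblTableRef2 : RecordRef;  // now local: released automatically when the function exits
  tblFieldRef2 : FieldRef;
BEGIN
  tblTableRef2 := tblTableRef.DUPLICATE;
  tblFieldRef2 := tblTableRef2.FIELD(FieldNo);
  tblFieldRef2.VALUE := NewID;
  tblTableRef2.MODIFY(FALSE);
  // (the DELETE/INSERT branch for renamed key fields from the original code would go here too)
END;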
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
Thanks again.
I commented out those lines, as suggested. However, the memory usage still continued to grow and grow. I did notice the memory "fall back" more, but I don't know for sure whether it actually happened more often.
It clearly still wasn't giving back the resources. Once the function completed, the memory usage dropped down really low again.
Any other thoughts? Is there something that I am not releasing?
Thanks
Server - 3.70.B
Client - 3.60
[edit] NM I just saw the commit statement.
Independent Consultant/Developer
blog: https://dynamicsuser.net/nav/b/ara3n
This was apparently solved by explicitly setting "Commit Cache" to "Yes" (it was previously left at the default "<Yes>").
However, I cannot see this setting either via the options or the File>Database>Information form.
Could this be because we are running Navision over SQL Server? If so, then presumably this is a "red herring"?
Thanks
Server - 3.70.B
Client - 3.60
Try running this on the server. I know this won't help the memory issue, but it will improve performance.
Independent Consultant/Developer
blog: https://dynamicsuser.net/nav/b/ara3n
- don't use the dialog box
- put the code in a function (or even a new codeunit) and use only local variables (a sketch follows below)
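A hedged sketch of how the trimmed-down OnRun might then look (no dialog box, and all the RecordRef/FieldRef work pushed into a function with local variables; the function name UpdateFieldInTable is illustrative, not from the original post):

OnRun()
rTableList.RESET;
IF rTableList.FIND('-') THEN
  REPEAT
    CLEAR(rFieldList);
    rFieldList.SETRANGE(TableName,rTableList.TableName);
    rFieldList.SETRANGE(FieldName,rTableList.FieldName);
    IF rFieldList.FIND('-') THEN
      // all RecordRef and FieldRef variables are locals inside this function
      UpdateFieldInTable(rFieldList);
  UNTIL rTableList.NEXT = 0;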
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
\:D/ WOW! How strange. I moved all the code from within the first loop into a separate function with local variables, and it has made a world of difference.
So, even though I was performing a CLEAR on all variables where possible, it still wasn't letting them go completely? I assume that putting the code into a function forces it to kill them off each time.
Now to test it in the full size database on the server. Hopefully I will be able to do this tonight.
:-k Just a thought, but would I be any better off pulling out code from the other loops into separate functions? Or should this not be necessary?
Many Thanks
Simon
Server - 3.70.B
Client - 3.60
A RecordRef dataset must be closed after usage and before a new OPEN...
Call the CLOSE method of the RecordRef after:
UNTIL tblTableRef.NEXT = 0;
Using a separate function resolves your issue because you declared the RecordRef as local, and Navision automatically frees a local variable when exiting the function...
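Applied to the original OnRun code, that would mean something like this once the inner record loop finishes (the exact placement is an assumption based on the posted code):

    UNTIL tblTableRef.NEXT = 0;
  tblTableRef.CLOSE;  // release the dataset before the next OPEN
END;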
Matteo
So the CLOSE will free up the memory? Well, no point changing it now. But I will bear it in mind. Thank you.
Not yet been able to test on full size database server side. Will update as 'solved' once I have done so.
Many thanks,
Server - 3.70.B
Client - 3.60
I got to test it on a copy of the live system, and therefore against full-size tables. I could see a change in memory usage, but was still experiencing problems. I could only assume this was related to the size of the tables, as it worked fine on my local copy.
I made further changes to the code to try to deal with this problem, and could then see major changes to the memory usage on my local database. This amended code was then run against a fresh copy of the live system at the weekend, and it ran through fine with no problems.
I did a data test this morning, and there are many IDs that have not been updated. There seems to be no pattern to it: I can have a table where x records for a particular user have not been updated, yet in the same table, and for the same user, y records have been updated!?
I tried copying one of these tables over to my local database, and re-ran the code against that one table. All records were successfully updated!
I am now really confused as to what is happening, and I am getting very close to my deadline.
Is the code skipping records for some reason?
If it is, then why is this not happening on the smaller, local database?
Please find a copy of the new code below. Does anyone have any ideas what might be going wrong?
Many Thanks,
Simon
Server - 3.70.B
Client - 3.60
A couple of weird points though:
1) For one of the tables in question, I copied the table across to my local DB and all records were updated OK.
2) For some tables it misses a few records, for others it can miss a few hundred.
Still very, very confused! :-k
Server - 3.70.B
Client - 3.60
On further investigation I found a link between the tables regarding the records that weren't updated... all of these records were at the end of the tables. So the update was getting part-way through each table and then stopping.
Looking into this further, I ran a SQL query against one of the tables to get a record count, then ran a COUNT against the same table in Navision... BINGO!!! Different results, for some bizarre reason.
Sticking within Navision I then found the following:
1) The following code does not return the correct record count:
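(The snippet was lost from the post; from the description it was presumably an unfiltered COUNT along these lines, with recTest standing in for the real record variable:)

recTest.RESET;
MESSAGE('%1',recTest.COUNT);  // unfiltered COUNT - returned too low a total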
2) The following code does return the correct record count:
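(Again the snippet is missing; presumably the same COUNT with some filter applied, e.g.:)

recTest.RESET;
recTest.SETFILTER("User ID",'<>%1','');  // "User ID" is illustrative; applying any filter appears to force a real count
MESSAGE('%1',recTest.COUNT);             // filtered COUNT - returned the correct total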
So, in actual fact, an unfiltered table reports fewer records than a filtered one!
Am I alone in thinking that this is a tiny bit unusual?
Has anyone come across this before?
Do you know what causes this?
This does not appear to be an issue with every table in every database. Other tables within this database were fully updated. Also, my local database updated every table successfully.
Any further help would be greatly appreciated.
Thanks,
Simon
BTW, as this is actually a separate issue, I will post it as a new topic, maybe with a link back to here for a bit of background.
Server - 3.70.B
Client - 3.60