What impact does my C/AL have on SQL?

Waldo Member Posts: 3,412
edited 2015-03-06 in SQL General
Source: my blog.

I wrote a little something about the FIND statements and such ... could be nice reading for a Sunday evening :wink: . For a better-edited version, go to my blog (the tables that I used are a pain in the ass here :().

This is a very challenging blog post in my opinion. I see many stories going around about the best way to use the new FIND instructions, and what C/AL to write in which case. I have my own thoughts on this, which I would like to share. I don't want to say the following is the best way to go; let's just say it's a good way to go.

First question (and I think the most important question on people's minds): how to use the FIND statements. This is not easy, because using the FIND statement really depends on the type of loop you're using.

First of all, the basics. I think the table from Stryk explains it very well:
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US'); 
Customer.FIND('-');

 SELECT *,DATALENGTH("Picture") 
FROM "dbo"."Cronus$Customer" 
WITH (READUNCOMMITTED) 
WHERE (("Country Code"='US')) 
ORDER BY "No_"
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US');
Customer.FINDFIRST;

 SELECT TOP 1 *,DATALENGTH("Picture") 
FROM "dbo"."Cronus$Customer" 
WITH (READUNCOMMITTED) 
WHERE (("Country Code"='US')) 
ORDER BY "No_"
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US'); 
Customer.FIND('+');

 SELECT *,DATALENGTH("Picture") 
FROM "dbo"."Cronus$Customer" 
WITH (READUNCOMMITTED) 
WHERE (("Country Code"='US')) 
ORDER BY "No_" DESC
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US'); 
Customer.FINDLAST;

 SELECT TOP 1 *,DATALENGTH("Picture") 
FROM "dbo"."Cronus$Customer" 
WITH (READUNCOMMITTED) 
WHERE (("Country Code"='US')) 
ORDER BY "No_" DESC
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US'); 
Customer.FINDSET;

 SELECT TOP 500 *,
DATALENGTH("Picture") 
FROM "dbo"."Cronus$Customer" 
WITH (READUNCOMMITTED) 
WHERE (("Country Code"='US')) 
ORDER BY "No_"
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US'); 
Customer.FINDSET(TRUE);

 SET TRANSACTION ISOLATION LEVEL 
SERIALIZABLE 
SELECT *,DATALENGTH("Picture") 
FROM "dbo"."Cronus$Customer" 
WITH (UPDLOCK) 
WHERE (("Country Code"='US')) 
ORDER BY "No_"
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US'); 
IF Customer.ISEMPTY THEN;

 SELECT TOP 1 NULL 
FROM "dbo"."Cronus$Customer" 
WITH (READUNCOMMITTED) 
WHERE (("Country Code"='US'))
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US'); 
IF Customer.COUNT <> 0 THEN;

 SELECT COUNT(*) 
FROM "dbo"."Cronus$Customer" 
WITH (READUNCOMMITTED) 
WHERE (("Country Code"='US'))
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US'); 
IF Customer.COUNTAPPROX <> 0 THEN;

 SET SHOWPLAN_ALL ON 
SELECT * 
FROM "dbo"."Cronus$Customer" 
WITH (READUNCOMMITTED) 
WHERE (("Country Code"='US'))
----------------------------------------------------- 
GLEntry.LOCKTABLE; 
GLEntry.FIND('+');

 SET TRANSACTION ISOLATION LEVEL
SERIALIZABLE 
SELECT * 
FROM "dbo"."Cronus$G_L Entry" 
WITH (UPDLOCK) 
ORDER BY "Entry No_" DESC
----------------------------------------------------- 
GLEntry.LOCKTABLE; 
GLEntry.FINDLAST;

 SET TRANSACTION ISOLATION LEVEL SERIALIZABLE 
SELECT TOP 1 * 
FROM "dbo"."Cronus$G_L Entry" 
WITH (UPDLOCK) 
ORDER BY "Entry No_" DESC

A ground rule might be: fetch as little data as possible - but all the data you need - in one server call. E.g. don't fetch all records if you're not going to loop through them; don't fetch data if you just want to know whether a record is present in that range; ...

The table above is nice, but it still doesn't tell me what I have to take into account when writing code. Let me try to help you with that by asking a few basic questions:

WHEN NOT LOOPING DATA

Am I going to change data?
Suppose you write this code:
recCust.GET('10000'); 
recCust.VALIDATE(Name, 'Van Terp Kantoorinrichting'); 
recCust.MODIFY(TRUE);

Not really exciting. In fact, I see code like this going around quite often. Well, the problem with it is that you cause an extra server call, because you didn't indicate that you wanted to change data. In fact, you'll see this in the Client Monitor:
----------------------------------------------------- 
recCust.GET('10000');
 SELECT *,DATALENGTH("Picture") 
FROM "CRONUS BELGIË NV$Customer"
WITH (READUNCOMMITTED) 
WHERE "No_"='10000' 
 Read the data without setting a lock (you didn't specify this)
-----------------------------------------------------
recCust.VALIDATE(...); 
recCust.MODIFY(TRUE);
 SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK, REPEATABLEREAD) WHERE "No_"='10000' 

UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET ...
 Read the data again and place a lock.

Update the fields.
-----------------------------------------------------

Issuing the SELECT twice is overkill, and must be avoided. And avoiding it is simple: specifying that you want to change the data can be done with "LOCKTABLE". This makes our small piece of code look like:
recCust.LOCKTABLE; 
recCust.GET('10000'); 
recCust.VALIDATE(Name, 'Van Terp Kantoorinrichting'); 
recCust.MODIFY(TRUE);
To make the comparison complete, here is what happens behind the scenes:
----------------------------------------------------- 
recCust.LOCKTABLE;
recCust.GET('10000');
 SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK, REPEATABLEREAD) WHERE "No_"='10000' 
Read the data and set a lock
----------------------------------------------------- 
recCust.VALIDATE(...); 
recCust.MODIFY(TRUE);
 UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET ...
-----------------------------------------------------

That being said, take a look at the following example:
IF ICHandledInboxTransaction.FIND('-') THEN BEGIN 
  ICHandledInboxTransaction.LOCKTABLE; 
  ICHandledInboxTransaction.Status := ICHandledInboxTransaction.Status::Posted; 
  ICHandledInboxTransaction.MODIFY; 
END;

This is code from default NAV (an earlier version), which could be replaced by the following (to keep the behaviour on Native unchanged):
IF RECORDLEVELLOCKING THEN //SQL Server: lock before reading, so the SELECT carries the UPDLOCK
  ICHandledInboxTransaction.LOCKTABLE; 
IF ICHandledInboxTransaction.FINDFIRST THEN BEGIN 
  ICHandledInboxTransaction.LOCKTABLE; //kept for Native, which locks at this point
  ICHandledInboxTransaction.Status := ICHandledInboxTransaction.Status::Posted; 
  ICHandledInboxTransaction.MODIFY; 
END;

The statement "RECORDLEVELLOCKING" is the way for checking in code whether you're working on SQL Server (TRUE) or Native (FALSE). Furthermore, I added these two lines of code because it's necessary to lock before we read (to send the UPDLOCK in SQL).

Does a record exist?
There are two (good) ways to check whether a record is present for a certain range:

Using FINDFIRST: Use this statement when you need the data of the first record, and the first record only.
Using ISEMPTY: this is a somewhat forgotten statement, but it's lighter than FINDFIRST. If you only want to check whether a record exists in the filtered range, but you don't need any field value from any of the records, use this statement.
See the table above for the SQL statement each produces (TOP 1 * or TOP 1 NULL).

For example:
IF SalesLine.FIND('-') THEN 
  Cust.CheckBlockedCustOnDocs(Cust,"Document Type",TRUE,TRUE);
Could be replaced by
IF NOT SalesLine.ISEMPTY THEN 
  Cust.CheckBlockedCustOnDocs(Cust,"Document Type",TRUE,TRUE);
WHEN LOOPING DATA

That wasn't that exciting, was it? But what if we want to loop data? That's a whole different story.

The first thing I have to state is that NAV uses cursors to simulate Native behaviour. The new FIND statements (from 4.0 onwards) are the first step to move away from this Native behaviour.

Looping without changing data

In Stryk's table above, you can see the difference on SQL level between FIND('-'), FINDFIRST and FINDSET. I'll try to compare what happens on SQL Server:
-----------------------------------------------------
IF recCust.FINDFIRST THEN 
REPEAT 
UNTIL recCust.NEXT = 0;

SELECT TOP 1 *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) ORDER BY "No_" 

SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) WHERE "No_">'01121212' ORDER BY "No_" 

FETCH 5 
FETCH 20 
FETCH 40 
FETCH 40
-----------------------------------------------------
IF recCust.FINDSET THEN 
REPEAT 
UNTIL recCust.NEXT = 0;

SELECT TOP 501 *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) ORDER BY "No_" 
-----------------------------------------------------
IF recCust.FIND('-') THEN 
REPEAT 
UNTIL recCust.NEXT = 0;

SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) ORDER BY "No_" 

FETCH 5 
FETCH 20 
FETCH 40 
FETCH 40
-----------------------------------------------------
What do we learn from this:

The best way to go is FINDSET. Why? It fetches a SET of records and works with this set all the way. You see that there is no cursor activity (no FETCH statements). If you read further, you'll see that this is not always the case!
FINDFIRST is worse than FIND('-') when looping records, because of the extra call (TOP 1 *). NAV does not expect a loop after a FINDFIRST; if it sees that you are looping, it issues a new SELECT * (the same as the FIND('-') statement) to be ready for the loop, and uses cursors to loop through the data (hence the FETCH statements).
Never blindly replace FIND('-') with FINDFIRST. Think about what you're doing.
Now, what about the number of records? Does that count?

Well, NAV behaves like this: if the set is bigger than 500 records (or the size you specified as "Record Set" value in the "Alter Database" window), it will not use the record set, but will "cursor" its way through the rest of the loop, "old school". I'll try to show it using two examples. In both of them, the code runs 2840 times through the loop (in other words, the message shows "2840").
-----------------------------------------------------
i := 0; 
IF recGL.FIND('-') THEN 
REPEAT 
  i := i + 1; 
UNTIL recGL.NEXT = 0; 
MESSAGE('%1',i);


SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_" 

FETCH 5 
FETCH 20 
FETCH 60 
FETCH 60 
FETCH 60 

. 
. 
. 

A total of 2845 "FETCHes"
----------------------------------------------------- 
i := 0; 
IF recGL.FINDSET THEN 
REPEAT 
  i := i + 1; 
UNTIL recGL.NEXT = 0; 
MESSAGE('%1',i);

SELECT TOP 501 * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_" 

SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) WHERE "Entry No_">501 ORDER BY "Entry No_" 

FETCH 5 
FETCH 20 
FETCH 60 
FETCH 60 
FETCH 60 
. 
. 
. 

A total of 2365 "FETCHes"
-----------------------------------------------------
In the case of FINDSET: a first impression shows you that when the result set is bigger than 500 records, NAV just issues an extra database call without the "TOP 500" and cursors its way through the records. When you examine it more deeply, you conclude that it actually loops through the record set of 500 records, and after that, NAV does not fetch the next 500 records, but just throws a "SELECT *" and cursors its way through the rest of the records.

That's why with FINDSET, we have about 500 fewer FETCHes than with FIND('-'). BUT, we have an extra database call, which is quite "heavy" (a "SELECT *" is heavier than a "FETCH 60").

It's hard to say what's best. Further benchmarking should point that out. I would bet my money on using FINDSET in all cases. Why?

You avoid roughly ten "FETCH x" calls (the ones that would have covered the first 500 records)
A "SELECT TOP 500" as an extra database call doesn't seem that heavy to me.
If anyone is interested in doing a thorough investigation on this ... Please count me in or let me know the results.

What do we learn from this:

Using FINDSET in front of every loop is a pretty safe bet.
FINDSET only gets the first 500 records as a set, and then cursors its way through the rest of the records like FIND('-').
Looping with changing data

The same rules as above apply; just make sure you lock the data before you read it. You can do this by using LOCKTABLE, like:
recGL.LOCKTABLE; 
IF recGL.FINDSET THEN 
REPEAT 
... 
UNTIL recGL.NEXT = 0;

Or, if using the FINDSET statement, you can use its parameter like:
IF recGL.FINDSET(TRUE) THEN 
REPEAT 
... 
UNTIL recGL.NEXT = 0;

There is also an option FINDSET(TRUE, TRUE). Only use this if you are going to modify a field value that is part of the current key. Even then, I really don't recommend using it: you have to beware of the "NEXT FROM HELL" (as Hynek likes to call it).

The "Next from hell" can happen when you filter on a value, and in a loop, you're changing that value. The loop jumps to places that are hard to predict. Also, when you change a key value in a loop (even without modify), a "NEXT" can give unexpected results. Try to avoid these situations by just declaring a new variable, and modify in that variable. For example, this is looking for trouble:
recCust.SETRANGE("Country/Region Code", 'BE'); 
IF recCust.FINDSET(TRUE) THEN 
REPEAT 
  recCust.VALIDATE("Country/Region Code", 'US'); 
  recCust.MODIFY; 
UNTIL recCust.NEXT = 0;

You're modifying the Country Code while you're filtered on that field. Use a separate record variable and leave your looping variable intact, like:
recCust.SETRANGE("Country/Region Code", 'BE'); 
IF recCust.FINDSET(TRUE) THEN 
REPEAT 
  recCust2 := recCust; 
  recCust2.VALIDATE("Country/Region Code", 'US'); 
  recCust2.MODIFY; 
UNTIL recCust.NEXT = 0;

Conclusion

When I started writing this article, I didn't expect it to be so hard to put this subject into words. I hope the above is clear to you, but to be sure, here is an overview of the things mentioned above.

Not looping data
  - Modifying data: use LOCKTABLE with GET or FINDFIRST.
  - Not modifying data (usually to check whether a record exists or not):
    - use FINDFIRST if you need the data of the record further down the road;
    - use ISEMPTY if you do not need any field values.
Looping data
  - Not modifying data: use a loop like IF FINDSET THEN REPEAT ... UNTIL NEXT = 0;
  - Modifying data: make sure you lock the records that you read, with a loop like IF FINDSET(TRUE) THEN REPEAT ... UNTIL NEXT = 0; - and mind the "NEXT FROM HELL"! (See the sketch below.)
So, always think about what the code is doing, then decide what you need for the most optimal performance on SQL Server.

While writing this article, I used the SQL Profiler and the Client Monitor to watch what my code was doing. I noticed very few people know what the Client Monitor can do, so who knows, there might be an upcoming blog post about monitoring your code.

Any comments are appreciated!

Eric Wauters
MVP - Microsoft Dynamics NAV
My blog

Comments

  • jlandeen Member Posts: 524
    First - this and a few of the other similar posts have been excellent to see on here. I definitely wish more developers followed some of these simple rules to write well-performing code - so good job!

    Second - a related question. When applying filters and ranges to a record set in SQL, I've always been told that you should apply the filters in the order of the key, to ensure that the underlying Navision engine translates the call correctly to SQL. Is this in fact true? If so, why?

    Here's an example of what I mean. If you have a key on Item Ledger (Table 32) of "Item No.","Location Code","Posting Date" - and you wanted all records from location 'BLUE' on January first. I've always followed the practice of coding the following:
    WITH lrItemLedgerEntry DO BEGIN
      SETCURRENTKEY("Item No.","Location Code","Posting Date");
      SETRANGE("Item No.");
      SETRANGE("Location Code",'BLUE');
      SETRANGE("Posting Date",010108D);
      IF FINDSET THEN
        <some more code here to use the found recordset>
    END;
    

    So in summary, if running on SQL does the order that I apply my filters really make a difference to the SQL code sent to the SQL server? Also what performance implications/problems could there be?
    Jeff Landeen - Sr. Consultant
    Epimatic Corp.

    http://www.epimatic.com
  • Angelo Member Posts: 180
    Great, Waldo....=D>

    you unveil the code behind C/AL when using SQL. I have a question regarding "locking". You said that every time we "GET" data and modify it, we should use rec.LOCKTABLE. If we always do this when reading and modifying data, is it true that we will get blocking or maybe deadlock problems? Especially if the modified data set is very big and a lot of users modify the table.


    regards,
    Angelo
  • Waldo Member Posts: 3,412
    Of course.
    Locking is normal behaviour; you can't avoid locking when you're modifying data. You just have to make sure you don't lock too many records, plus you should always try to lock in the same order ... (to avoid deadlocks).
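
    To illustrate that "same order" rule with the textbook example: if process A locks Customer and then wants Item, while process B has locked Item and now wants Customer, they wait for each other and SQL Server has to kill one of them. If both processes had locked Customer first, the second one would simply have waited its turn.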

    Eric Wauters
    MVP - Microsoft Dynamics NAV
    My blog
  • Waldo Member Posts: 3,412
    jlandeen wrote:
    First - this and a few of the other similar posts have been excellent to see on here. I definitely wish more developers followed some of these simple rules to write well-performing code - so good job!

    Second - a related question. When applying filters and ranges to a record set in SQL, I've always been told that you should apply the filters in the order of the key, to ensure that the underlying Navision engine translates the call correctly to SQL. Is this in fact true? If so, why?

    Here's an example of what I mean. If you have a key on Item Ledger (Table 32) of "Item No.","Location Code","Posting Date" - and you wanted all records from location 'BLUE' on January first. I've always followed the practice of coding the following:
    WITH lrItemLedgerEntry DO BEGIN
      SETCURRENTKEY("Item No.","Location Code","Posting Date");
      SETRANGE("Item No.");
      SETRANGE("Location Code",'BLUE');
      SETRANGE("Posting Date",010108D);
      IF FINDSET THEN
        <some more code here to use the found recordset>
    END;
    

    So in summary, if running on SQL does the order that I apply my filters really make a difference to the SQL code sent to the SQL server? Also what performance implications/problems could there be?

    Well,
    I simply tested this out with the client monitor.
    I took your code:
    WITH lrItemledgerEntry DO BEGIN 
      SETCURRENTKEY("Item No.","Location Code","Posting Date"); 
      SETRANGE("Posting Date",010108D);
      SETRANGE("Location Code",'BLUE');
      SETRANGE("Item No."); 
      IF FINDSET THEN;
        //<some more code here to use the found recordset>
    END;
    
    and
    WITH lrItemledgerEntry DO BEGIN 
      SETCURRENTKEY("Item No.","Location Code","Posting Date");
      SETRANGE("Location Code",'BLUE');
      SETRANGE("Posting Date",010108D);
      SETRANGE("Item No."); 
      IF FINDSET THEN;
        //<some more code here to use the found recordset>
    END;
    

    Every time I changed the order, I got the same SQL code ... so my conclusion would be: it doesn't matter.

    Eric Wauters
    MVP - Microsoft Dynamics NAV
    My blog
  • kriki Member, Moderator Posts: 9,115
    I also noted that Navision puts the fields in the order of the SETCURRENTKEY. I don't know about the order of filters on other fields.
    Regards,Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!


  • Waldo Member Posts: 3,412
    Almost forgot to add it on Mibuso (sorry ... :oops: ). The second post about this topic (you can read a better-edited version on my blog), but, as you can already expect from me, I added the text here as well for search reasons :wink: .
    Enjoy!

    Time to put my kids to sleep a bit earlier, so that I have some time to talk about some other statements. This time, I will try to go deeper into the MODIFYALL and DELETEALL statements. Hope you enjoy it. I know I will (don't know where I will end up).

    MODIFYALL

    Good to know about the MODIFYALL statement is that the OnValidate trigger is a "no-go": you won't be able to call the OnValidate trigger of the fields you're modifying. This is very important to know, of course: only use MODIFYALL if no OnValidate trigger is necessary (OK, I said it three times now, in three sentences, in three ways).

    The OnModify trigger is possible, even with the MODIFYALL statement.
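
    For the case where the OnValidate trigger of the field IS needed, MODIFYALL is off the table and a loop is the fallback. A minimal sketch (the "Language Code" filter is just an example, chosen to match the traces below):
    recCust.SETRANGE("Language Code", 'ENC');
    IF recCust.FINDSET(TRUE) THEN
    REPEAT
      recCust.VALIDATE(Name, 'waldo'); //runs the field's OnValidate trigger
      recCust.MODIFY(FALSE); //pass TRUE here if the OnModify trigger is needed as well
    UNTIL recCust.NEXT = 0;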

    First: a MODIFYALL without the OnModify-Trigger:
    recCust.SETRANGE(...); 
    recCust.MODIFYALL(Name,'waldo',FALSE);
    ======================
    UPDATE "CRONUS BELGIË NV$Customer" 
    SET "Name"='waldo' 
    WHERE (("Language Code"='ENC'))
    ======================
    Only one UPDATE statement is produced.
    
    So, what is the loop variant?
    recCust.SETRANGE(...); 
    IF recCust.FINDSET(TRUE,FALSE) THEN 
    REPEAT 
    recCust.Name := 'waldo'; 
    recCust.MODIFY(FALSE); 
    UNTIL recCust.NEXT = 0;
    ======================
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK) WHERE (("Language Code"='ENC')) ORDER BY "No_" 
    
    UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET "Klantengroep"=0 WHERE ("No_"='01905893') 
    
    UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET "Klantengroep"=0 WHERE ("No_"='01905899') 
    
    UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET "Klantengroep"=0 WHERE ("No_"='01905902') 
    
    ...
    ======================
    First a SELECT statement for the FIND. 
     
    For every MODIFY in the loop, an UPDATE statement is issued.
    

    It is easy to see which statement is best to use in this case: MODIFYALL, of course. There is only one statement sent to the server, and all records are modified. Now, what about updating more than one field (like Erik P. Ernst asked in a comment on my previous post)? Let's take similar code for that. I'll update three fields:
    recCust.SETRANGE(...); 
    recCust.MODIFYALL(Name,'waldo',FALSE); 
    recCust.MODIFYALL("Name 2",'Waldo',FALSE); 
    recCust.MODIFYALL("Search Name",'WALDO',FALSE);
    ======================
    UPDATE "CRONUS BELGIË NV$Customer" SET "Name"='waldo' WHERE (("Language Code"='ENC')) 
    
    UPDATE "CRONUS BELGIË NV$Customer" SET "Name 2"='Waldo' WHERE (("Language Code"='ENC')) 
    
    UPDATE "CRONUS BELGIË NV$Customer" SET "Search Name"='WALDO' WHERE (("Language Code"='ENC'))
    ====================== 
    For every MODIFYALL statement, one UPDATE statement is generated.
    
    The looping variant:
    recCust.SETRANGE(...); 
    IF recCust.FINDSET(TRUE,FALSE) THEN 
    REPEAT 
    recCust.Name := 'waldo'; 
    recCust."Name 2" := 'Waldo'; 
    recCust."Search Name" := 'WALDO'; 
    recCust.MODIFY(FALSE); 
    UNTIL recCust.NEXT = 0;
    ======================
    SELECT TOP 19 *,DATALENGTH("Picture") 
    FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK) 
    WHERE (("Language Code"='ENC')) ORDER BY "No_" 
    
    UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) 
    SET "Name"='waldo',"Search Name"='WALDO',"Name 2"='Waldo' 
    WHERE ("No_"='01905893') 
    
    UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) 
    SET "Name"='waldo',"Search Name"='WALDO',"Name 2"='Waldo' 
    WHERE ("No_"='01905899') 
    
    UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) 
    SET "Name"='waldo',"Search Name"='WALDO',"Name 2"='Waldo' 
    WHERE ("No_"='01905902')
    ======================
    For every MODIFY an UPDATE statement, no matter how many fields are updated.
    

    Remark:
    It will only put those fields in the statement that were actually changed!

    You see that the MODIFY and MODIFYALL statements behave a little differently. MODIFYALL does not group the field changes into one statement, while MODIFY does. Easy to remember: every MODIFY and every MODIFYALL produces a server call.

    What do we learn from this?

    What's best to use (when no OnModify trigger has to be executed)? It actually depends on the number of records you expect in your range:
    If you expect a huge number of records in the range and only a few fields to update: use MODIFYALL.
    If you expect only a few records - fewer than the number of fields to update - you might want to go for the MODIFY in the loop.
    Note that the loop always produces a SELECT statement. Be sure to send your lock together with this statement by using FINDSET(TRUE,...) or LOCKTABLE (see the previous post for more info).
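
    A quick worked example with hypothetical numbers: with 1,000 records in the range and 3 fields to set, MODIFYALL sends 3 UPDATE statements, while the loop sends 1 SELECT plus 1,000 UPDATEs - MODIFYALL wins easily. With 2 records and 10 fields to set, the loop sends 1 SELECT plus 2 UPDATEs against MODIFYALL's 10 UPDATEs - now the loop wins.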

    Second: A MODIFYALL with the OnModify-trigger

    So tell me, how many times do you NOT want to call the OnModify trigger? Not too many, I'd guess. Let's see what happens when we DO want the OnModify trigger to be called. Obviously, I'll take the same code as above, but change the parameter:
    recCust.SETRANGE(...); 
    recCust.MODIFYALL(Name,'waldo',TRUE);
    ======================
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK) WHERE (("Language Code"='ENC')) ORDER BY "No_" 
    
    UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET "Name"='waldo' WHERE ("No_"='01905893') 
    
    . 
    
    (OnModify Trigger Business Logic) 
    
    . 
    
    UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET "Name"='waldo' WHERE ("No_"='01905899') 
    
    . 
    
    (OnModify Trigger Business Logic) 
    
    .
    ======================
    Instead of only one UPDATE statement, I had (for this example) 120 SQL statements produced.
    

    You can clearly see that the OnModify trigger was called. To be able to do this, the updates aren't combined into one statement anymore. In fact, the above result is exactly the same as doing all of this in a loop:

    The SELECT to range the loop (of course with the UPDLOCK)
    An UPDATE, ranged on the primary key
    So, if we do the same thing in a loop instead of the MODIFYALL, you get:
    recCust.SETRANGE(...); 
    IF recCust.FINDSET(TRUE,FALSE) THEN 
    REPEAT 
    recCust.Name := 'waldo'; 
    recCust.MODIFY(TRUE); 
    UNTIL recCust.NEXT = 0;
    ======================
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK) WHERE (("Language Code"='ENC')) ORDER BY "No_" 
    
    UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET "Name"='waldo',"Search Name"='WALDO' WHERE ("No_"='01905893') 
    
    . 
    
    (OnModify Trigger Business Logic) 
    
    . 
    
    UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET "Name"='waldo',"Search Name"='WALDO' WHERE ("No_"='01905899') 
    
    . 
    
    (OnModify Trigger Business Logic) 
    
    .
    
    You see it's about the same. So, when calling the OnModify trigger, the code really isn't that different.
    

    Now, I have to say that I noticed something strange. When I monitored the code above, I noticed that it entered the OnModify trigger, but Rec.Name and xRec.Name were the same ... (while I did change the value). This is something I will not go into right now (who knows, another blog post?) - I will investigate it later. Bottom line: it did what we expected (i.e. it called the OnModify trigger).

    What do we learn from this:

    Apart from the strange Rec/xRec behaviour, we can say that using the loop or the MODIFYALL statement does not matter if you're calling the OnModify trigger on the table anyway.

    That's about all I can think of to test and monitor about the MODIFYALL statement. Let's take a deeper dive into the DELETEALL.

    DELETEALL

    I expect this to behave the same way, but to be sure, I will execute roughly the same tests as above, but with the delete statements. With a DELETEALL statement, you have a similar option: you can call the OnDelete trigger, or you can choose not to call it, of course.

    First: a DELETEALL without the OnDelete-Trigger:

    To make things as transparent as possible, I'll take similar code for this example.
    recCust.SETRANGE(...); 
    recCust.DELETEALL(FALSE);
    ======================
    DELETE FROM "CRONUS BELGIË NV$Customer" WHERE (("Language Code"='ENC'))
    ======================
    Only one statement to delete the entire range.
    

    This is exactly what I expected, if you compare it with the MODIFYALL statement described above. We also know what to expect from the equivalent:

    recCust.SETRANGE(...);
    IF recCust.FINDSET(TRUE,FALSE) THEN
    REPEAT
    recCust.DELETE(FALSE);
    UNTIL recCust.NEXT = 0;
    ======================
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK) WHERE (("Language Code"='ENC')) ORDER BY "No_"

    DELETE FROM "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) WHERE ("No_"='01905893')

    DELETE FROM "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) WHERE ("No_"='01905899')

    DELETE FROM "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) WHERE ("No_"='01905902')
    ======================
    One SELECT (with UPDLOCK because of the FINDSET(TRUE)) and, for every record, a DELETE statement ranged on the primary key.

    And again, not really a surprise.
    We are not changing field values here, so unlike with MODIFYALL, we do not have to worry about the number of changed fields, which influenced the decision whether to use MODIFY or MODIFYALL (and again, that makes my life easier).

    What do we learn from this?

    When the OnDelete-trigger doesn't have to be called, then always go for the DELETEALL statement.

    Second: a DELETEALL with the OnDelete-Trigger:

    Well, we know what to expect here as well, don't we? If MODIFYALL issues an UPDATE for each record, I expect DELETEALL to do exactly the same:
    recCust.SETRANGE(...); 
    recCust.DELETEALL(TRUE);
    ====================== 
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK) WHERE (("Language Code"='ENC')) ORDER BY "No_" 
    
    . 
    
    (OnDelete Trigger Business Logic) 
    
    . 
    
    DELETE FROM "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) WHERE ("No_"='01905893') 
    
    . 
    
    (OnDelete Trigger Business Logic) 
    
    . 
    
    DELETE FROM "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) WHERE ("No_"='01905899') 
    
    . 
    
    (OnDelete Trigger Business Logic) 
    
    .
    ======================
    First a SELECT with UPDLOCK to lock the range
    
    Then the OnDelete trigger business Logic
    
    Then the DELETE statement to delete the Customer record - ranged on the primary key
    


    Not really a surprise, is it? (OK, I know, it's getting boring, but I want to finish the story.) To conclude, let's take a look at the alternative loop:
    recCust.SETRANGE(...); 
    IF recCust.FINDSET(TRUE,FALSE) THEN 
    REPEAT 
    recCust.DELETE(TRUE); 
    UNTIL recCust.NEXT = 0;
    ======================
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK) WHERE (("Language Code"='ENC')) ORDER BY "No_" 
    
    . 
    
    (OnDelete Trigger Business Logic) 
    
    . 
    
    DELETE FROM "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) WHERE ("No_"='01905893') 
    
    . 
    
    (OnDelete Trigger Business Logic) 
    
    . 
    
    DELETE FROM "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) WHERE ("No_"='01905899') 
    
    . 
    
    (OnDelete Trigger Business Logic) 
    
    .
    ======================
    First a SELECT with UPDLOCK to lock the range
    
    Then the OnDelete trigger business Logic
    
    Then the DELETE statement to delete the Customer record - ranged on the primary key
    

    This is EXACTLY the same as the DELETEALL statement. But let's be honest ... we expected this, didn't we?

    What do we learn from this?

    It doesn't matter whether you use DELETEALL(TRUE) or the alternative loop with FINDSET(TRUE) and DELETE(TRUE): on SQL Server, the same code is executed. Now, you could argue that with DELETEALL "only two lines of code are executed in C/SIDE" while the loop executes many more. But I beg to differ, because it is C/SIDE or NAV or ndbcs.dll (which to me is one environment) that is sending the statements to SQL Server. That means it's still the client that is looping, whether intentionally (with the loop) or not (with the DELETEALL). So for me: it doesn't matter; whatever fits you.

    So, this is all; I hope you enjoyed it. Next on the agenda (if you're still interested): marking records.

    Eric Wauters
    MVP - Microsoft Dynamics NAV
    My blog
  • Slawek_Guzek Member Posts: 1,690
    Hi,

    Thanks for that - very interesting reading...

    Regarding calling one line of C/AL code versus several lines - C/AL is an interpreted language, so it does matter how many lines of code we execute.

    Let's do some simple tests:

    1.
    i := 0;
    dt := CURRENTDATETIME;
    WHILE i < 1000000 DO BEGIN
      i := i + 1;
    END;
    MESSAGE('WHILE loop + standard NAV code : %1',CURRENTDATETIME-dt);
    
    2.
    i := 0;
    dt := CURRENTDATETIME;
    WHILE i < 1000000 DO
      i := i + 1;
    MESSAGE('WHILE loop + standard NAV code, no BEGIN/END : %1',CURRENTDATETIME-dt);
    
    3.
    i := 0;
    dt := CURRENTDATETIME;
    WHILE i < 1000000 DO
      i += 1;
    MESSAGE('WHILE loop + C-like code %1',CURRENTDATETIME-dt);
    
    4.
    dt := CURRENTDATETIME;
    FOR i := 0 TO 1000000 DO
      ;
    MESSAGE('FOR loop: %1',CURRENTDATETIME-dt);
    
    5.
    dt := CURRENTDATETIME;
    FOR i := 0 TO 1000000 DO
      i := i;
    MESSAGE('FOR loop + dummy code: %1',CURRENTDATETIME-dt);
    

    dt is a variable of type DateTime; i is an Integer

    My results were :

    1. 456 msec
    2. 922 msec
    3. 297 msec
    4. 47 msec
    5. 153 msec

    The difference is obvious; the question is only whether it is meaningful or meaningless compared to the time of executing the SQL query. My guess: probably not, but why not do things right the first time :)
    Slawek Guzek
    Dynamics NAV, MS SQL Server, Wherescape RED;
    PRINCE2 Practitioner - License GR657010572SG
    GDPR Certified Data Protection Officer - PECB License DPCDPO1025070-2018-03
  • Waldo Member Posts: 3,412
    The next one ... here is the (hopefully) better-edited version.

    Time for the next chapter for the "what impact does my C/AL have on SQL"-story. I already covered the following in previous blogposts:

    What impact does my C/AL have on SQL
    This one was the first. It dealt mainly with the FIND statements, and the looping and modifying of records.
    What impact does my C/AL have on SQL - MODIFYALL/DELETEALL
    Because of a comment on the first blog post, I decided to elaborate a little: there is more than just the FIND statements. In that post, I dealt with MODIFYALL and DELETEALL.
    As you might have read in the previous post, I promised to write a little something around the marking of records.

    Not easy...

    During my "investigation" (if I may call it like that [:|]), I had my doubts to post this or not. It seemed difficult to make good, decent, funded conclusions ... . Therefore, I started to hesitate on my tests ... So why post this post at all? But it turned out that I decided to do it. I had put too much work into it and why not just put my results online, and let you decide what you conclude out of it? I just give my opinion, and off course, you're free to disagree (hopefully not too much [;)] ).

    What?

    You probably all know about marking records (hopefully - because otherwise this post is of no use at all [;)]). When it's not possible to get the set of records you want by applying a filter or a combination of filters, a solution could be to mark the records and "view marked only". You can do it by selecting the records and pressing CTRL+F1. After selecting and marking a number of records, you click View / Marked Only. There is your set. I could explain this with a number of screenshots, arrows, text, ... but I actually don't want to encourage anyone to use it, so I'm not going to [;)].

    You can mark records manually and through code, and in both circumstances there is a lot to take into account.

    Manually

    Let's first do it manually and see what happens on SQL. These are the steps that I followed in my TEST database (5.0 SP0) while I was monitoring with the Client Monitor:

    a. Opened the Customer List
    b. Selected the first five records
    c. Marked them with CTRL+F1
    d. Selected the 7th record and marked that one
    e. Selected from the 9th until the 13th record and marked them. At this time, the selection looked something like this.
    f. Started the Client Monitor, because I'm only interested in what C/SIDE does to show only the records which I marked.
    g. Clicked "View" / "Marked Only" on the customer list.
    h. Obviously, this was the result.
    i. Stopped the Client Monitor.
    (damn, these were the screenshots I didn't want to use ... yeah well ... [:|])

    The result in the Client Monitor: NOTHING! The Client Monitor shows us that no SQL statement was produced. This is because it is able to use cached records. You can see it if you look at the "SQL Status" in the Client Monitor ... which is "Cached".

    Now, how the caching of records behaves is hard to predict, so I started to play around. I wanted to view the marked-only records without the records being cached ... so that C/SIDE would be obliged to go to SQL Server again. So I played around with changing some records, fetching some other records, ... basically just waiting a while. After a while (one minute), I went back to my list (which was still open with the marked records you see above) and did the same as above (from step "f"). These were the resulting SQL commands:
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WHERE "No_">='21252947' ORDER BY "No_" 
    SELECT TOP 1 NULL FROM "Record Link" WHERE (("Company"='CRONUS BELGIË NV')) AND (("Type"=0)) AND (("Record ID"=0x120000000089083231323532393437000000)) 
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WHERE "No_"='27090917' 
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WHERE "No_"='27321782' 
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WHERE "No_"='45282829' 
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WHERE "No_"='21245278' 
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WHERE "No_"='21233572' 
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WHERE "No_"='20312912' 
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WHERE "No_"='20000' 
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WHERE "No_"='10000' 
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WHERE "No_"='01454545' 
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WHERE "No_"='01445544' 
    SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WHERE "No_"='01121212'
    

    This is more or less what we expected, isn't it? So, if the records are not cached (for whatever reason ...), you get an awful lot of SELECT statements (as many as the number of records you marked). And in my opinion, this is something you want to avoid.

    Furthermore, one extra comment: the SELECT statements are based on the primary key, NOT on the clustered index. For fun, I changed the clustered index of the G/L Entry to "G/L Account, Posting Date" (this can happen in a few occasions ...). This resulted in a similar statement as before (a SELECT statement with a filter on the Entry No., which is the primary key), but with a less efficient execution plan - not using the clustered index - and sometimes even a clustered index scan when performing this on some other tables! It all depends on the situation.

    Conclusion:

    The records are cached, which means there is a mechanism trying to handle "marking records" as efficiently as possible. Only, the records are not cached for very long.
    For every marked record, it's possible NAV will generate a server call, based on the primary key.
    Now, you might say: "OK, but usually, immediately after marking my records, I click View / Marked Only, so most probably my records will be cached, and a performance problem is highly unlikely." You've got a point there ... Just know that this might be a factor to look at [:)] - and you see above how you can recognize such statements ...

    In C/AL

    We also have the option to use this feature in code, namely with the statements MARK and MARKEDONLY. When using these, it's not hard to imagine that you could end up marking a huge number of records. I decided to start by just marking every other record in the G/L Entry table with the following code (I'm looping through about 2860 records, which resulted in about 1430 marked records).
    recGLEntry.FINDSET; 
    REPEAT 
      recGLEntry.MARK(TRUE); 
    UNTIL recGLEntry.NEXT(2) <= 1; //NEXT(2) steps two records at a time, so every other record gets marked 
    recGLEntry.MARKEDONLY(TRUE); 
    recGLEntry.FINDSET; //Point to the first record to start looping 
    REPEAT 
    UNTIL recGLEntry.NEXT = 0;
    

    This happened in the client monitor:
    SELECT TOP 501 * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_" 
    SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) WHERE "Entry No_">? ORDER BY "Entry No_" 
    FETCH 20 
    FETCH 60 
    . 
    . 
    . 
    FETCH 60 
    SELECT TOP 1 NULL FROM "Record Link" WHERE (("Type"=?)) AND (("Record ID"=?))
    
    You can recognize the loop as I explained in my first blog post. Furthermore, there is not one SQL statement for the marking or the looping of the marked records. All I could see as SQL Status: Cached! Hmm. All that caching does not make my life simpler at the moment (or maybe it does, but I just don't realize it [;)]). To be sure, I tested this with the SQL Profiler as well, and it showed exactly the same behaviour, so it must be processing this loop of 1400 records locally, in cache.

    Modifying data

    Let's go to the next level and alter the data in our record set. I just added some lines to my code:
    recGLEntry.FINDSET; 
    REPEAT 
      recGLEntry.MARK(TRUE); 
    UNTIL recGLEntry.NEXT(2) <= 1; 
    recGLEntry.MARKEDONLY(TRUE); 
    recGLEntry.FINDSET(TRUE); 
    REPEAT 
      recGLEntry.Description := recGLEntry."Description" + '1'; 
      recGLEntry.MODIFY; 
    UNTIL recGLEntry.NEXT = 0;
    

    This was the output:
    SELECT TOP 501 * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_" 
    SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) WHERE "Entry No_">? ORDER BY "Entry No_" 
    FETCH 20 
    . 
    . 
    . 
    FETCH 60 
    FETCH 60 
    SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (UPDLOCK, REPEATABLEREAD) WHERE "Entry No_"=? 
    UPDATE "CRONUS BELGIË NV$G_L Entry" WITH (REPEATABLEREAD) SET "Description"=? WHERE ("Entry No_"=?) 
    SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (UPDLOCK) WHERE "Entry No_"=? 
    UPDATE "CRONUS BELGIË NV$G_L Entry" WITH (REPEATABLEREAD) SET "Description"=? WHERE ("Entry No_"=?) 
    . 
    . 
    .
    

    The first part (marking the records) you recognize from earlier. But now, for every record you're going to change, it produces an extra server call to lock the record (one by one, server call by server call). This means two calls per record you're modifying.

    You realize this is not very efficient: you already read the data ... and now you're going to read it again, just to lock it. Imagine writing code like this on a huge table in a huge database, with a clustered index different from the primary key ... this process can become a pain that will have to be reviewed ...

    One tip I can give you is easy: simply lock the records in the first loop:
    recGLEntry.FINDSET(TRUE); 
    REPEAT 
      recGLEntry.MARK(TRUE); 
    UNTIL recGLEntry.NEXT(2) <= 1; 
    recGLEntry.MARKEDONLY(TRUE); 
    recGLEntry.FINDSET(TRUE); 
    REPEAT 
      recGLEntry.Description := recGLEntry.Description + '1'; 
      recGLEntry.MODIFY; 
    UNTIL recGLEntry.NEXT = 0;
    

    This results in:
    SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (UPDLOCK) ORDER BY "Entry No_" 
    FETCH 20 
    FETCH 60
    . 
    . 
    . 
    FETCH 60 
    UPDATE "CRONUS BELGIË NV$G_L Entry" WITH (REPEATABLEREAD) SET "Description"=? WHERE ("Entry No_"=?) 
    UPDATE "CRONUS BELGIË NV$G_L Entry" WITH (REPEATABLEREAD) SET "Description"=? WHERE ("Entry No_"=?) 
    UPDATE "CRONUS BELGIË NV$G_L Entry" WITH (REPEATABLEREAD) SET "Description"=?WHERE ("Entry No_"=?) 
    . 
    . 
    .
    

    Now, this is a TIP, not a solution. It produces far fewer server calls (the SELECTs in front of the UPDATEs are gone), but it's going to lock your entire range (or even the whole table, in this case) if you're not careful and don't know what you're doing.

    I also realize that the business logic of the above C/AL code does not make much sense. It would be better to modify in the first loop and not use "MARKEDONLY" at all. But it illustrates what's going on on SQL Server.

    COUNT

    Another thing marking records might be useful for is counting the number of records that match a certain condition. Here is my code:
    recGLEntry.FINDSET; 
    REPEAT 
      recGLEntry.MARK(TRUE); 
    UNTIL recGLEntry.NEXT(2) <= 1; 
    recGLEntry.MARKEDONLY(TRUE); 
    MESSAGE(FORMAT(recGLEntry.COUNT));
    

    Result again:
    SELECT TOP 501 * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_" 
    SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) WHERE "Entry No_">? ORDER BY "Entry No_" 
    FETCH 20 
    FETCH 60 
    . 
    . 
    . 
    FETCH 60 
    SELECT TOP 1 NULL FROM "Record Link" WHERE (("Type"=?)) AND (("Record ID"=?))
    

    It counts in cache, without producing any extra server calls. Actually, at this point, I expected this to happen ... :)

    One more remark...

    When I was experimenting, I noticed another (maybe small) detail. To loop through your marked records, you have to position yourself at the beginning of the set. At this point (ONLY when marking records) it does not matter whether you use FIND('-'), FINDFIRST or FINDSET: it isn't going to produce any SQL statement; it's just going to point to the first record in cache. BUT, it does matter where you put it. Let C/SIDE know you're working with the marked-only set, so put the FIND statement after the MARKEDONLY. If not, like this:
    recGLEntry.FINDSET; 
    REPEAT 
      recGLEntry.MARK(TRUE); 
    UNTIL recGLEntry.NEXT(2) <= 1; 
    recGLEntry.FINDSET; //Point to the first record to start looping 
    recGLEntry.MARKEDONLY(TRUE); 
    REPEAT 
    UNTIL recGLEntry.NEXT = 0;
    

    It'll produce one extra statement, like below:
    SELECT TOP 501 * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_" 
    SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) WHERE "Entry No_">? ORDER BY "Entry No_" 
    FETCH 20 
    FETCH 60 
    . 
    . 
    . 
    FETCH 60 
    SELECT TOP 501 * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_" 
    SELECT TOP 1 NULL FROM "Record Link" WHERE (("Type"=?)) AND (("Record ID"=?))
    

    The result seems the same, but the behaviour on SQL is different, as you can see ...

    CONCLUSION

    As I said, I have mixed feelings about this blog post. One feeling is that it's not complete. I have been thinking it through, and there are just a huge number of ways to use this, and a huge number of things it might be useful for ... And did I cover them all? I'm sure I didn't. Loops in loops, partly filtered sets, millions of records, ... are not covered in this post. Let's just say I would like to encourage you to monitor your code.

    I'm also bothered by the caching. It's so unpredictable; it just does not produce consistent behaviour.

    Now, as you might have noticed, I'm not really in favour of marking records; I'm not the one who will encourage you to use it. In my opinion, marking records is only useful when you can't set a filter. And if you can't set a filter, that means you're going to loop through too many records (the entire table?), mark records, and then "filter your marked set". When looping through the entire table, you're possibly locking too much, or reading too often, or whatever - putting too much load on the database (imagine 20 users doing this at the same time on a 20 GB table [:|]). Let's try to avoid that! Marking records does not avoid it; it is (again, in my opinion) just a way to create workarounds.

    What I would like to suggest is to put a little more effort into "fixing your actual problem", which is: making sure you can use a filter! This might involve creating an extra field and writing code to update that field (in all circumstances ...), which can be a huge job. But huge job or not, it will make sure you go to the database more efficiently. And did I already mention this is only my opinion? [;)]

    Feel free to give me your comments...

    Eric Wauters
    MVP - Microsoft Dynamics NAV
    My blog
  • Waldo Member Posts: 3,412
    lol.
    I can't. I recently (a few hours ago) got the flaming comment that I shouldn't use smileys in my blog. Hey, what can I say .. it's a habit (read: addiction) :lol::mrgreen: \:D/ :wink: .

    Eric Wauters
    MVP - Microsoft Dynamics NAV
    My blog
  • Waldo Member Posts: 3,412
    The next one is born:
    What impact does my C/AL have on SQL - SIFT on 5.0 SP1
    Again, for a better-edited version (and for the screenshots) you should go to my blog ...

    Normally, I would have written an article about SumIndexFields on versions prior to 5.0 SP1, but I decided not to. The Belgian version of Microsoft Dynamics NAV 5.0 SP1 has just been released, and the Belgian release is always pretty late. In fact ... I just checked ... every localized version is available right now. That's why I decided to dig into this version's new SIFT technology on SQL Server first. Who knows, I may be in the mood to write about the SIFT technology of the previous versions when I'm in the hospital in a few days (yes, I'm waiting for an operation on my knee [:(]) ...

    What is SIFT?
    SIFT is an abbreviation of "Sum Index Flow Technology". Simply described, it's a technology that stores sums in the background, to be able to quickly retrieve summarized amounts. For developers: SIFT tables contain records with summed values, grouped by the fields of the key on which you defined the SumIndexField (the amount field that should be summed) (wow). For sales people: SIFT is a technology that distinguishes NAV from any other ERP system. For functional consultants: SIFT is sums. For the somewhat more experienced technical consultant who cares about performance: SIFT is a technology that speeds up read performance on summed fields, but it should be looked at carefully, because speeding up read performance usually means slowing down write performance.
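
    A small worked example with hypothetical numbers: if G/L account 705000 has 10,000 ledger entries spread over 200 posting dates, a SIFT structure on "G/L Account No., Posting Date" holds only 200 pre-summed rows for that account, so totalling the account means adding up 200 values instead of 10,000.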

    Main difference with old SIFT

    The biggest change they made in 5.0 SP1 is that instead of SIFT tables, there are now SIFT views, which are indexed. This means there is an efficient way to get results from the view (because the results are "stored" in a certain way), but it also means there is a cost to maintain the view. Maybe not comparable to the dedicated SIFT tables of the previous versions, but still ... It also means that the triggers that existed to maintain the SIFT tables ... are gone ... which is a good thing!
    Now, quite a few posts have been written about this by the Sustained Engineering Team on their blog, and there is also a whitepaper available on PartnerSource. Definitely worth reading.

    Creating SIFT view


    A SIFT view is created when you configure a SumIndexField on a key ... indeed, just the same as before. With the property "MaintainSIFTIndex", you can enable or disable the SIFT view. If this property is FALSE, the SIFT view will disappear.

    This example shows how NAV creates an indexed view:
    CREATE VIEW GLEntry$VSIFT$1 AS 
    
    SELECT SUM(Amount) as SUM$Amount, AccountNo, PostingDate 
    FROM GLEntry GROUP BY AccountNo,PostingDate
    
    CREATE UNIQUE CLUSTERED INDEX VSIFTIDX 
    ON GLEntry$VSIFT$1(AccountNo,PostingDate)
    

    (pasted from the whitepaper)

    In SQL Server Management Studio, a similar view looks like this:

    <Look at my blog for the picture>

    It's very clear, isn't it? You can read it right away: it's just grouping by G/L Account and Posting Date, and summing up all requested amounts (SumIndexFields). We even have a "Count" field, which indicates how many records exist for that grouping. But ...

    Hey, where are my SIFT buckets?

    Gone! As described in the whitepaper: "The indexed view that is used for a SIFT key is always created at the most finely-grained level." That means: no SIFT Levels window in C/SIDE anymore, no buckets anymore, ... Simplicity!

    But what if you would like to create another level, maybe a less detailed one? That could be beneficial! Well, again, simple:

    Just create a new key with the fields of the level you would like to create the SIFT view on
    If you don't want to maintain the key as an index in SQL, disable MaintainSQLIndex
    Add the fields for the sums in the SumIndexFields property
    You see? It's just creating a new SIFT view on a new key. When you do this for the SIFT above, but only on "G/L Account No." level, you get a SIFT view like this:

    <Look at my blog for the picture>
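
    In the same simplified notation as the whitepaper example above, such a less detailed view would look something like this (just a sketch: the "$2" view name is hypothetical, and like the whitepaper sample it leaves out the SCHEMABINDING and the $Cnt/COUNT_BIG(*) column that the real view carries):
    CREATE VIEW GLEntry$VSIFT$2 AS
    
    SELECT SUM(Amount) as SUM$Amount, AccountNo
    FROM GLEntry GROUP BY AccountNo
    
    CREATE UNIQUE CLUSTERED INDEX VSIFTIDX
    ON GLEntry$VSIFT$2(AccountNo)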

    Now, when and how is it used in C/SIDE?

    In flowfields, of course. How dare you ask that question?
    Everybody knows the Chart of Accounts. You know it contains some flowfields that total the "G/L Entry" records on this form. If you monitor it, you'll see SQL Server receiving statements like this:
    SELECT SUM("SUM$Amount"),SUM("SUM$Amount") 
    FROM dbo."EN DEFAULT COMPANY$G_L Entry$VSIFT$1" WITH(NOEXPAND) 
    WHERE (("G_L Account No_"='705000'))
    
    So, it's just querying the SIFT view to get the sum. Because this view already contains summed amounts, it has fewer values to sum. But keep in mind the levels: only the most detailed level is maintained. The statement above sums only on G/L Account, which is the least detailed level. That level is not maintained anymore in 5.0 SP1, which means that (by default) more records have to be summed up than before.
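
    For completeness, a minimal C/AL sketch of what fires such a statement (recGLAccount is a record variable on the G/L Account table; I'm assuming its standard Balance flowfield, which sums "G/L Entry".Amount):
    recGLAccount.GET('705000');
    recGLAccount.CALCFIELDS(Balance); //answered from the VSIFT view as long as the SIFT index is maintained
    MESSAGE('%1', recGLAccount.Balance);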

    What if I disable "MaintainSIFTIndex"?

    If we disable the MaintainSIFTIndex of the SIFT above, we expect C/SIDE to query the G/L Entry table directly. And indeed, this is the statement it sends:
    SELECT SUM("Amount"),SUM("Amount") 
    FROM "EN DEFAULT COMPANY$G_L Entry" 
    WHERE (("G_L Account No_"='705000'))
    

    You can imagine this statement will have to sum a lot more amounts than the one above.

    Is there another way in which the SIFT views are used?

    Yep, there's still one: an often forgotten statement is CALCSUMS. It's a way to use SumIndexFields without using (or creating) a flowfield on a table. To keep the same logic as above:
    CLEAR(recGLEntry); 
    recGLEntry.SETCURRENTKEY("G/L Account No."); 
    recGLEntry.SETRANGE("G/L Account No.",'705000'); 
    recGLEntry.CALCSUMS(Amount);
    

    You see I'm not using a flowfield, but I am using the SumIndexField of my G/L Entry table. This is the resulting statement in SQL Server:
    SELECT SUM("SUM$Amount") 
    FROM dbo."EN DEFAULT COMPANY$G_L Entry$VSIFT$1" 
    WITH (READUNCOMMITTED, NOEXPAND ) 
    WHERE (("G_L Account No_"='705000'))
    

    So, what about my ZeroSiftRecords?

    Probably everyone has heard of the ZeroSiftRecords phenomenon. In the old days (versions prior to 5.0 SP1 ... so not really that old [:)]), it could happen that SIFT tables contained records with summed values all equal to 0. The reason was that every record in the main table had been deleted for that particular summed group, and the sum of non-existing records is 0.

    Now we're working with views. These views are maintained by SQL Server automagically. A view like this is actually some kind of (summed) projection of the main table. SQL Server is clever enough to exclude the zero records. So, no more ZeroSiftRecords from now on. That's one maintenance job less [:)].

    So what's that $Cnt-field for?

    As you can see, there is also a $Cnt field present in the SIFT view. It counts the number of "summing" records. But what is it used for? I tested a "Count" flowfield, without result:
    SELECT COUNT(*)
    FROM "EN DEFAULT COMPANY$G_L Entry" 
    WHERE (("G_L Account No_"='640100'))
    

    A normal "COUNT" in C/AL code:
    SELECT COUNT(*)
    FROM "EN DEFAULT COMPANY$G_L Entry" 
    WHERE (("G_L Account No_"='640100'))
    

    An "Avarage" flowfield:
    SELECT SUM("SUM$Amount") 
    FROM dbo."EN DEFAULT COMPANY$G_L Entry$VSIFT$1" 
    WITH (READUNCOMMITTED, NOEXPAND ) 
    WHERE (("G_L Account No_"='705000'))
    

    No use of the $Cnt field whatsoever. Maybe it's used internally, I don't know. I couldn't think of any functionality or any way to see where it's used (maybe you have a clue?).
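
    One possible explanation: SQL Server itself requires a COUNT_BIG(*) column in any indexed view that uses GROUP BY - it needs that count to maintain the view correctly when rows are deleted from the base table - so $Cnt might simply be there to satisfy that requirement, rather than for any C/SIDE feature.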

    SQL Server 2000

    My opinion about SQL 2000 users is: "hey guys, we're in the year 2008" ... But that's only my opinion.

    Seriously, this article states very clearly that maintaining an indexed view on SQL Server 2000 can be a time-consuming process (because of the clustered index scans, etc.). Therefore, the recommendation is: when upgrading to 5.0 SP1, you should always consider upgrading to SQL Server 2005 as well.

    Do I need SQL Server 2005 Enterprise edition?

    If you read the comparison between the SQL Server 2005 editions, you could conclude that you need the expensive Enterprise Edition to run Microsoft Dynamics NAV 5.0 SP1. This is a wrong conclusion!

    Because the "Indexed View Matching" functionality is not used by Dynamics NAV, this is not an issue. This was beautifully explained in this blog post by (once again) the Sustained Engineering Team, as a response to my question a week earlier [:D].


    OK, that's it, guys. I hope you enjoyed it. Please feel free to drop a comment with .. any .. uhm .. comment [:)].

    PS: after this article was finished, I noticed another very explanatory blog post from the UK team. You can find it here.

    Eric Wauters
    MVP - Microsoft Dynamics NAV
    My blog
  • vanrofi Member Posts: 272
    Waldo,

    very informative, but I prefer Dr House on a Sunday evening...

    Thx, was very helpful.
  • Waldo Member Posts: 3,412
    vanrofi wrote:
    Waldo,

    very informative, but I prefer Dr House on a Sunday evening...

    Thx, was very helpful.

    Until you realize what you miss ... :wink:

    Eric Wauters
    MVP - Microsoft Dynamics NAV
    My blog
  • divyar Member Posts: 17
    Hi,
    recCust.SETRANGE(...);
    recCust.DELETEALL(TRUE);

    Will these two statements produce SQL SELECT and DELETE statements?

    recCust.SETRANGE(...);
    recCust.DELETEALL(FALSE);

    Will these two statements produce a SQL DELETE statement alone?

    [This is how it's given in your code sample. Please clarify.]