-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US');
Customer.FIND('-');

SELECT *,DATALENGTH("Picture") FROM "dbo"."Cronus$Customer" WITH (READUNCOMMITTED) WHERE (("Country Code"='US')) ORDER BY "No_"
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US');
Customer.FINDFIRST;

SELECT TOP 1 *,DATALENGTH("Picture") FROM "dbo"."Cronus$Customer" WITH (READUNCOMMITTED) WHERE (("Country Code"='US')) ORDER BY "No_"
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US');
Customer.FIND('+');

SELECT *,DATALENGTH("Picture") FROM "dbo"."Cronus$Customer" WITH (READUNCOMMITTED) WHERE (("Country Code"='US')) ORDER BY "No_" DESC
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US');
Customer.FINDLAST;

SELECT TOP 1 *,DATALENGTH("Picture") FROM "dbo"."Cronus$Customer" WITH (READUNCOMMITTED) WHERE (("Country Code"='US')) ORDER BY "No_" DESC
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US');
Customer.FINDSET;

SELECT TOP 500 *, DATALENGTH("Picture") FROM "dbo"."Cronus$Customer" WITH (READUNCOMMITTED) WHERE (("Country Code"='US')) ORDER BY "No_"
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US');
Customer.FINDSET(TRUE);

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
SELECT *,DATALENGTH("Picture") FROM "dbo"."Cronus$Customer" WITH (UPDLOCK) WHERE (("Country Code"='US')) ORDER BY "No_"
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US');
IF Customer.ISEMPTY THEN;

SELECT TOP 1 NULL FROM "dbo"."Cronus$Customer" WITH (READUNCOMMITTED) WHERE (("Country Code"='US'))
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US');
IF Customer.COUNT <> 0 THEN;

SELECT COUNT(*) FROM "dbo"."Cronus$Customer" WITH (READUNCOMMITTED) WHERE (("Country Code"='US'))
-----------------------------------------------------
Customer.SETRANGE("Country Code", 'US');
IF Customer.COUNTAPPROX <> 0 THEN;

SET SHOWPLAN_ALL ON
SELECT * FROM "dbo"."Cronus$Customer" WITH (READUNCOMMITTED) WHERE (("Country Code"='US'))
-----------------------------------------------------
GLEntry.LOCKTABLE;
GLEntry.FIND('+');

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
SELECT * FROM "dbo"."Cronus$G_L Entry" WITH (UPDLOCK) ORDER BY "Entry No_" DESC
-----------------------------------------------------
GLEntry.LOCKTABLE;
GLEntry.FINDLAST;

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
SELECT TOP 1 * FROM "dbo"."Cronus$G_L Entry" WITH (UPDLOCK) ORDER BY "Entry No_" DESC
-----------------------------------------------------
recCust.GET('10000');
recCust.VALIDATE(Name, 'Van Terp Kantoorinrichting');
recCust.MODIFY(TRUE);
-----------------------------------------------------
recCust.GET('10000');

SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) WHERE "No_"='10000'

Read the data without setting a lock (you didn't specify this).
-----------------------------------------------------
recCust.VALIDATE(...);
recCust.MODIFY(TRUE);

SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK, REPEATABLEREAD) WHERE "No_"='10000'
UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET ...

Read the data again and place a lock. Update the fields.
-----------------------------------------------------
recCust.LOCKTABLE;
recCust.GET('10000');
recCust.VALIDATE(Name, 'Van Terp Kantoorinrichting');
recCust.MODIFY(TRUE);

To make the comparison complete, here is what happens behind the scenes:
-----------------------------------------------------
recCust.LOCKTABLE;
recCust.GET('10000');

SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK, REPEATABLEREAD) WHERE "No_"='10000'

Read the data and set the lock right away.
-----------------------------------------------------
recCust.VALIDATE(...);
recCust.MODIFY(TRUE);

UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET ...
-----------------------------------------------------
IF ICHandledInboxTransaction.FIND('-') THEN BEGIN
  ICHandledInboxTransaction.LOCKTABLE;
  ICHandledInboxTransaction.Status := ICHandledInboxTransaction.Status::Posted;
  ICHandledInboxTransaction.MODIFY;
END;
IF RECORDLEVELLOCKING THEN
  ICHandledInboxTransaction.LOCKTABLE;
IF ICHandledInboxTransaction.FINDFIRST THEN BEGIN
  ICHandledInboxTransaction.LOCKTABLE;
  ICHandledInboxTransaction.Status := ICHandledInboxTransaction.Status::Posted;
  ICHandledInboxTransaction.MODIFY;
END;
IF SalesLine.FIND('-') THEN
  Cust.CheckBlockedCustOnDocs(Cust,"Document Type",TRUE,TRUE);

Could be replaced by
IF NOT SalesLine.ISEMPTY THEN
  Cust.CheckBlockedCustOnDocs(Cust,"Document Type",TRUE,TRUE);

WHEN LOOPING DATA
-----------------------------------------------------
IF recCust.FINDFIRST THEN
  REPEAT
  UNTIL recCust.NEXT = 0;

SELECT TOP 1 *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) ORDER BY "No_"
SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) WHERE "No_">'01121212' ORDER BY "No_"
FETCH 5
FETCH 20
FETCH 40
FETCH 40
-----------------------------------------------------
IF recCust.FINDSET THEN
  REPEAT
  UNTIL recCust.NEXT = 0;

SELECT TOP 501 *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) ORDER BY "No_"
-----------------------------------------------------
IF recCust.FIND('-') THEN
  REPEAT
  UNTIL recCust.NEXT = 0;

SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) ORDER BY "No_"
FETCH 5
FETCH 20
FETCH 40
FETCH 40
-----------------------------------------------------

What do we learn from this:
-----------------------------------------------------
i := 0;
IF recGL.FIND('-') THEN
  REPEAT
    i := i + 1;
  UNTIL recGL.NEXT = 0;
MESSAGE('%1',i);

SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_"
FETCH 5
FETCH 20
FETCH 60
FETCH 60
FETCH 60
. . .
A total of 2845 "FETCHes"
-----------------------------------------------------
i := 0;
IF recGL.FINDSET THEN
  REPEAT
    i := i + 1;
  UNTIL recGL.NEXT = 0;
MESSAGE('%1',i);

SELECT TOP 501 * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_"
SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) WHERE "Entry No_">501 ORDER BY "Entry No_"
FETCH 5
FETCH 20
FETCH 60
FETCH 60
FETCH 60
. . .
A total of 2365 "FETCHes"
-----------------------------------------------------

In the case of the FINDSET: a first impression suggests that when the result set is bigger than 500 records, NAV just throws in an extra database call without the "TOP 500" and cursors its way through the records. When you examine it more deeply, you conclude that it actually loops through the record set of 500 records first; after that, NAV does not fetch the next 500 records, but just throws a "SELECT *" and cursors its way through the rest of the records.
recGL.LOCKTABLE;
IF recGL.FINDSET THEN
  REPEAT
    ...
  UNTIL recGL.NEXT = 0;
IF recGL.FINDSET(TRUE) THEN
  REPEAT
    ...
  UNTIL recGL.NEXT = 0;
recCust.SETRANGE("Country/Region Code", 'BE'); IF recCust.FINDSET(TRUE) THEN REPEAT recCust.VALIDATE("Country/Region Code", 'US'); recCust.MODIFY; UNTIL recCust.NEXT = 0;
recCust.SETRANGE("Country/Region Code", 'BE'); IF recCust.FINDSET(TRUE) THEN REPEAT recCust2 := recCust; recCust2.VALIDATE("Country/Region Code", 'US'); recCust2.MODIFY; UNTIL recCust.NEXT = 0;
Comments
Second - a related question. When applying filters and ranges to a recordset on SQL, I've always been told that you should apply the filters in the order of the key to ensure that the underlying Navision engine translates the call correctly to SQL. Is this in fact true? If so, why?
Here's an example of what I mean. If you have a key on Item Ledger Entry (Table 32) of "Item No.","Location Code","Posting Date" - and you wanted all records from location 'BLUE' on January first - I've always followed the practice of coding the following:
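Something along these lines (sketched from the description above; the record variable name and the date are mine):

ItemLedgEntry.SETCURRENTKEY("Item No.","Location Code","Posting Date");
ItemLedgEntry.SETRANGE("Location Code",'BLUE');
ItemLedgEntry.SETRANGE("Posting Date",010108D);

rather than applying the two SETRANGEs in the opposite order.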
So in summary, if running on SQL does the order that I apply my filters really make a difference to the SQL code sent to the SQL server? Also what performance implications/problems could there be?
Epimatic Corp.
http://www.epimatic.com
You unveil the code behind C/AL when using SQL. I have a question regarding "locking". You said that every time we GET data and modify it while looping, we use rec.LOCKTABLE. If we always use this when reading and modifying data, is it true that we will have blocking or maybe deadlock problems? Especially if the modified data set is very big and a lot of users are modifying the table.
regards,
Angelo
Locking is normal behaviour. You can't avoid locking when you're modifying data. You just have to see that you don't lock too many records, plus you have to try to lock always in the same order ... (to avoid deadlocks).
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
Well,
I simply tested this out with the client monitor.
I took your code, and every time I changed the order of the filters, I got the same SQL code... so my conclusion would be: it doesn't matter.
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
Enjoy!
Time to put my kids to sleep a bit earlier, so that I have some time to talk about some other statements. This time, I will try to go deeper into the MODIFYALL and DELETEALL statements. Hope you enjoy it. I know I will (don't know where I will end up).
MODIFYALL
Good to know about the MODIFYALL statement is that the OnValidate trigger is a "no-go": you won't be able to call the OnValidate trigger of the fields you're modifying. This is very important to know, of course: only use MODIFYALL if no OnValidate trigger is necessary (OK, I said it three times now, in three sentences, in three ways).
The OnModify trigger is possible, even with the MODIFYALL statement.
First: a MODIFYALL without the OnModify-Trigger:
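A minimal sketch (the range and the field value are illustrative, in the spirit of the Customer examples above), together with the kind of statement you'd then see in the Client Monitor:

recCust.SETRANGE("Country/Region Code",'BE');
recCust.MODIFYALL("Language Code",'ENU',FALSE);

UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET "Language Code"='ENU' WHERE (("Country_Region Code"='BE'))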
So, what is the loop variant?
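Roughly this (same illustrative range and field as the sketch above):

recCust.SETRANGE("Country/Region Code",'BE');
IF recCust.FINDSET(TRUE) THEN
  REPEAT
    recCust."Language Code" := 'ENU';
    recCust.MODIFY(FALSE);
  UNTIL recCust.NEXT = 0;

which produces one SELECT ... WITH (UPDLOCK) for the range, plus one UPDATE per record, ranged on the primary key.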
It is easy to see which statement is best to use in this case: the MODIFYALL, of course. There is only one statement sent to the server, and all records are modified. Now, what about updating more than one field (like Erik P. Ernst asked in a comment on my previous post)? Let's take similar code for that. I'll update three fields:
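Sketched with three illustrative fields:

recCust.SETRANGE("Country/Region Code",'BE');
recCust.MODIFYALL("Language Code",'ENU',FALSE);
recCust.MODIFYALL("Salesperson Code",'PS',FALSE);
recCust.MODIFYALL("Territory Code",'FOREIGN',FALSE);

Each MODIFYALL sends its own UPDATE statement to the server, so this produces three UPDATEs.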
The looping variant:
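Again as a sketch, with the same three illustrative fields:

recCust.SETRANGE("Country/Region Code",'BE');
IF recCust.FINDSET(TRUE) THEN
  REPEAT
    recCust."Language Code" := 'ENU';
    recCust."Salesperson Code" := 'PS';
    recCust."Territory Code" := 'FOREIGN';
    recCust.MODIFY(FALSE);
  UNTIL recCust.NEXT = 0;

Here every MODIFY sends one UPDATE that sets all three fields at once.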
Remark:
It will only put those fields in the statements that were actually updated!
You see that the MODIFY and MODIFYALL statements behave a little differently. The MODIFYALL does not group the changes of the fields into one statement, while the MODIFY statement does. Easy to remember: every MODIFY and MODIFYALL produces a server call.
What do we learn from this?
What's best to use (when no Modify-trigger has to be executed): It actually depends on the number of records you expect in your range:
If you expect a huge number of records in the range and only a few fields to update: use the MODIFYALL
If you expect only a few records, less than the number of fields to update, you might want to go for the MODIFY in the loop.
Note that the loop always produces a SELECT statement. Be sure to send your lock together with this statement by using FINDSET(TRUE,...) or LOCKTABLE (see previous post for more info).
Second: A MODIFYALL with the OnModify-trigger
So tell me, how often do you NOT want to call the OnModify trigger? Not too often, right? Let's see what happens when we do want the OnModify trigger to be called. Obviously, I will take the same code as above, but change the parameter:
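In sketch form, it's just the third parameter that changes:

recCust.SETRANGE("Country/Region Code",'BE');
recCust.MODIFYALL("Language Code",'ENU',TRUE);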
You can clearly see that the OnModify trigger was called. To be able to do this, the update isn't combined into one statement anymore. In fact, the above result is exactly the same as doing all this in a loop:
The SELECT to range the loop (of course with the UPDLOCK)
An update, ranged on the primary key
So, if we do the same thing in a loop instead of with the MODIFYALL, you get:
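A sketch of that loop (same illustrative range and field as before):

recCust.SETRANGE("Country/Region Code",'BE');
IF recCust.FINDSET(TRUE) THEN
  REPEAT
    recCust."Language Code" := 'ENU';
    recCust.MODIFY(TRUE);
  UNTIL recCust.NEXT = 0;

The statements on SQL Server follow the same pattern: the ranged SELECT with UPDLOCK, then an UPDATE per record on the primary key.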
Now, I have to say that I noticed something strange. When I monitored the code above, I noticed that it entered the OnModify trigger, but Rec.Name and xRec.Name were the same ... (while I did change the value). This is something I will not go into right now (who knows, another blog post?) - I will investigate it later. Bottom line is that it did what we expected (i.e. calling the OnModify trigger).
What do we learn from this:
Apart from the strange Rec/xRec behaviour, we can say that using the loop or using the MODIFYALL statement does not matter if you're calling the OnModify trigger on the table anyway.
That's about all I can think of to test and monitor about the MODIFYALL statement. Let's take a deeper dive into the DELETEALL.
DELETEALL
I expect this to have the same behaviour, but to be sure, I will just execute about the same tests as above, but with the delete statements. Also, with a DELETEALL statement, you have a similar option: you can call the OnDelete trigger, or you can choose NOT to call it. Of course.
First: a DELETEALL without the OnDelete-Trigger:
To make things as transparent as possible, I'll take similar code for this example.
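A sketch, reusing the "Language Code" range you'll see in the loop equivalent below:

recCust.SETRANGE("Language Code",'ENC');
recCust.DELETEALL(FALSE);

DELETE FROM "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) WHERE (("Language Code"='ENC'))

One single DELETE statement for the whole range.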
This is exactly what I expected it to be, if you compare it with the MODIFYALL statement like described above. We also know what to expect from the equivalent:
recCust.SETRANGE(...);
IF recCust.FINDSET(TRUE,FALSE) THEN
  REPEAT
    recCust.DELETE(FALSE);
  UNTIL recCust.NEXT = 0;
SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK) WHERE (("Language Code"='ENC')) ORDER BY "No_"
DELETE FROM "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) WHERE ("No_"='01905893')
DELETE FROM "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) WHERE ("No_"='01905899')
DELETE FROM "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) WHERE ("No_"='01905902')
One select (with UPDLOCK because of the FINDSET(TRUE)) and for every record, a DELETE statement, ranged on the primary key
And again, not really a surprise.
We are not changing the values of fields here, so unlike with the MODIFYALL statement, we do not have to worry about the number of fields being changed, which had an influence on the decision whether to use MODIFY or MODIFYALL (and again, that makes my life easier).
What do we learn from this?
When the OnDelete-trigger doesn't have to be called, then always go for the DELETEALL statement.
Second: a DELETEALL with the OnDelete-Trigger:
Well, we know what to expect here as well, don't we? If the MODIFYALL is going to call an UPDATE for each record, I expect the DELETEALL to do exactly the same:
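In sketch form:

recCust.SETRANGE("Language Code",'ENC');
recCust.DELETEALL(TRUE);

and on SQL Server you get the ranged SELECT with UPDLOCK followed by one DELETE per record on the primary key - the same statements as shown earlier for the loop.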
Not really a surprise, is it? (OK, I know, it's getting boring, but I want to finish the story.) To conclude, let's take a look at the alternative loop:
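The alternative loop, sketched:

recCust.SETRANGE("Language Code",'ENC');
IF recCust.FINDSET(TRUE,FALSE) THEN
  REPEAT
    recCust.DELETE(TRUE);
  UNTIL recCust.NEXT = 0;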
This is EXACTLY the same as the DELETEALL statement. But let's be honest ... Again, we expected this, didn't we ?
What do we learn from this?
It doesn't matter whether you use the DELETEALL(TRUE) or the alternative loop with the FINDSET(TRUE) and DELETE(TRUE). In SQL Server, the same code is executed. Now, you could argue that with DELETEALL "only two lines of code are executed in C/SIDE", while with the loop that's not the case. But I beg to differ on that, because it is C/SIDE or NAV or the ndbcs.dll (which is for me one environment) that is sending the statements to SQL Server. That means that it's still the client that is looping, whether it was intentional (with the loop) or not (with the DELETEALL). So for me: it doesn't matter, whatever fits for you.
So, this is all, I hope you enjoyed it. Next thing on the agenda (if you're still interested): Marking records.
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
Thanks for that - very interesting reading...
Regarding calling one line of C/AL code versus several lines - C/AL is an interpreted language, so it does matter how many lines of code we are executing.
Let's do some simple tests:
1. 2. 3. 4. 5.
dt is a variable of type DateTime, i is an Integer.
My results were :
1. 456 msec
2. 922 msec
3. 297 msec
4. 47 msec
5. 153 msec
The difference is obvious; the question is only whether it is meaningful or meaningless compared to the time of executing the SQL query. My guess - probably not, but why not do things right the first time?
Dynamics NAV, MS SQL Server, Wherescape RED;
PRINCE2 Practitioner - License GR657010572SG
GDPR Certified Data Protection Officer - PECB License DPCDPO1025070-2018-03
Time for the next chapter in the "what impact does my C/AL have on SQL" story. I already covered the following in previous blog posts:
What impact does my C/AL have on SQL
This one was the first. It dealt mainly with the FIND statements, and the looping and modifying of records.
What impact does my C/AL have on SQL - MODIFYALL/DELETEALL
Because of a comment on the first blog post, I decided to elaborate a little bit. Namely, there is more than FIND statements. In this blog post, I dealt with the MODIFYALL and DELETEALL.
As you might have read in the previous post, I promised to write a little something around the marking of records.
Not easy...
During my "investigation" (if I may call it like that [:|]), I had my doubts to post this or not. It seemed difficult to make good, decent, funded conclusions ... . Therefore, I started to hesitate on my tests ... So why post this post at all? But it turned out that I decided to do it. I had put too much work into it and why not just put my results online, and let you decide what you conclude out of it? I just give my opinion, and off course, you're free to disagree (hopefully not too much [;)] ).
What?
You probably all know about marking records (hopefully - because otherwise this post is of no use at all [;)]). When it's not possible to get the set of records you want by applying a filter or a combination of filters, a solution could be to mark the records and "view marked only". You can do it by selecting the records and pressing CTRL+F1. After selecting and marking a number of records, you can click View / Marked Only. There is your set. I could explain this with a number of screenshots, arrows, text, ... but actually I don't want to encourage anyone to use it, so I'm not gonna [;)].
You can mark records manually and through code, and in both circumstances, there is a lot to take into account.
Manually
Let's first do it manually and see what happens on SQL. These are the steps that I followed in my TEST database (5.0 SP0) while I was monitoring with the Client Monitor:
Opened the Customer List
Selected the first five records
Marked them with CTRL+F1
Selected the 7th record and marked that one
Selected from the 9th until the 13th record and marked them.
At this time, the selection looked something like this.
Next, I started the Client Monitor, because I'm only interested in what C/SIDE does to show only the records which I marked.
Then I clicked "View" / "Marked Only" on the customer list
Obviously, this was the result.
Stopped the Client Monitor
(damn, these were the screenshots I didn't want to use ... yeah well ... [:|]).
Result in the Client Monitor: NOTHING! The Client Monitor shows us that no SQL statement was produced. This is because it is able to use cached records. You can see it if you look at the "SQL Status" in the Client Monitor ... which is "Cached".
Now, how the caching of records is going to behave is hard to predict, so I started to play around. I wanted to achieve viewing the marked-only records without the records being cached ... so that C/SIDE would be obliged to go to SQL Server again. So I was playing around with changing some records, fetching some other records, ... basically just waiting a while. After a while (1 minute), I went back to my list (which was still open with the marked records you see above) and did the same as above (from step "f"). These were the resulting SQL commands:
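They looked roughly like this - one SELECT per marked record, filtered on the primary key (customer numbers illustrative):

SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) WHERE "No_"='01121212'
SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) WHERE "No_"='01445544'
... (and so on, one statement for every marked record)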
This is more or less what we expected, isn't it? So, if the records are not cached (for some reason ...), you get an awful lot of SELECT statements (as many as the number of records you marked). And in my opinion, this is something you want to avoid.
Furthermore, one extra comment. The SELECT statements are based on the primary key, NOT on the clustered index. For fun, I changed the clustered index of the G/L Entry table to "G/L Account, Posting Date" (this can happen in a few occasions ...). This resulted in a similar statement as before (a SELECT statement with a filter on the Entry No., which is the primary key), but with a less efficient execution plan - not using the clustered index - and sometimes even a clustered index scan when performing this on some other tables! It all depends on the situation.
Conclusion:
The records are cached, which means that there is a mechanism that tries to handle "marking records" as efficiently as possible. Only, the records are not cached very long.
For every marked record, it's possible NAV will generate a servercall, based on the primary key.
Now, you might say: "OK, but usually, immediately after marking my records, I click View / Marked Only, so most probably my records will be cached, and a performance problem is highly unlikely." You've got a point there ... . Just know that this might be a factor to look at [:)] - and you see above how you can recognize similar statements ... .
In C/AL
We also have the option to use this feature in code, namely with the statements MARK and MARKEDONLY. When we're using this, it's not hard to imagine that this could result in marking a huge number of records. I decided to start with just marking every other record in the G/L Entry table with the following code (I'm looping about 2860 records, which resulted in about 1430 marked records).
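A sketch of such code (the exact test code was on the blog; the MOD 2 test is my illustration of "every other record", and i is an Integer):

i := 0;
IF recGL.FINDSET THEN
  REPEAT
    i := i + 1;
    IF i MOD 2 = 1 THEN
      recGL.MARK(TRUE);
  UNTIL recGL.NEXT = 0;
recGL.MARKEDONLY(TRUE);
IF recGL.FINDSET THEN
  REPEAT
  UNTIL recGL.NEXT = 0;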
This happened in the client monitor:
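The statements followed the familiar loop pattern from the first post, something like:

SELECT TOP 501 * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_"
SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) WHERE "Entry No_">501 ORDER BY "Entry No_"
FETCH ...

and nothing at all for the MARK calls or for looping the marked records.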
You can recognize the loop as I explained in my first blog post. Furthermore, there is not one SQL statement for the marking or looping of the marked records. All I could see as SQL Status: Cached! Hm. All that caching does not make my life simpler at the moment (or maybe it just does, but I don't realize it [;)]). To be sure, I tested this with the SQL Profiler as well, and it showed exactly the same behaviour, so it must be processing this loop of 1400 records locally, in cache.
Modifying data
Let's go to the next level. Let's alter the data in our recordset. I just added some lines to my code:
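Along these lines (the Description value is illustrative):

recGL.MARKEDONLY(TRUE);
IF recGL.FINDSET THEN
  REPEAT
    recGL.Description := 'Changed';
    recGL.MODIFY;
  UNTIL recGL.NEXT = 0;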
This was the output:
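Per modified record, roughly this pair of statements (entry number illustrative):

SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (UPDLOCK, REPEATABLEREAD) WHERE "Entry No_"=617
UPDATE "CRONUS BELGIË NV$G_L Entry" WITH (REPEATABLEREAD) SET "Description"='Changed' WHERE ("Entry No_"=617)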
The first part (marking the records) you recognize from earlier. But now, for every record you're going to change, it's going to produce an extra servercall to lock the record (one by one, servercall by servercall). This means, two calls per record you're modifying.
You realize this is not very efficient. You already read the data ... and now you're going to do it again to lock. Imagine you're writing code like this on a huge table on a huge database, with a clustered index different from the primary key ... this process can become a pain which will have to be reviewed... .
One tip I can give you is easy. Simply lock the records in the first loop:
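In the sketch above, that means the first loop starts with:

IF recGL.FINDSET(TRUE) THEN  // read AND lock the records you're about to mark

instead of the plain FINDSET.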
This results in:
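Roughly: the first loop now reads with UPDLOCK, and the second loop sends only the UPDATEs (entry numbers illustrative):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (UPDLOCK) ORDER BY "Entry No_"
UPDATE "CRONUS BELGIË NV$G_L Entry" WITH (REPEATABLEREAD) SET "Description"='Changed' WHERE ("Entry No_"=617)
UPDATE "CRONUS BELGIË NV$G_L Entry" WITH (REPEATABLEREAD) SET "Description"='Changed' WHERE ("Entry No_"=619)
...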
Now, this is a TIP, not a solution. It's going to produce far fewer server calls (the SELECTs in front of the UPDATEs are gone), but it's going to lock your entire range (or even the entire table in this case) if you're not careful and don't know what you're doing.
I also realize that the business logic of the above C/AL code does not make much sense. It's better to modify in the first loop and not using "MARKEDONLY" at all. But it's to illustrate what's going on on SQL Server.
COUNT
Another thing marking records might be useful for is counting the records that match a certain condition. Here is my code:
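Sketched:

recGL.MARKEDONLY(TRUE);
MESSAGE('%1',recGL.COUNT);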
Result again:
It is going to count in cache, not producing any extra servercalls. Actually, at this point, I expected this to happen... .
One more remark...
When I was experimenting, I noticed another (maybe small) detail. To loop your marked records, you have to position yourself at the beginning of the set. At this point (ONLY when marking records) it does not matter whether you use FIND('-'), FINDFIRST or FINDSET: it isn't going to produce any SQL statement, it's just going to point to the first record in cache. BUT, it does matter where you put it. Let C/SIDE know you're working with the marked-only set, so put the FIND statement after the MARKEDONLY. If not, like this:
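That is, sketched, with the FIND placed before the MARKEDONLY:

recGL.FINDSET;           // positioning first ...
recGL.MARKEDONLY(TRUE);  // ... and only then switching to the marked set
REPEAT
UNTIL recGL.NEXT = 0;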
It'll produce one extra statement like below:
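Something along the lines of (following the FINDSET pattern from the first post):

SELECT TOP 501 * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_"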
The result seems the same, but the behaviour on SQL is different, as you can see... .
CONCLUSION
Like I said, I have a bit of mixed feelings about this blog post. One feeling is that it's not complete. I have been thinking it through, and there are just a huge number of ways to use this, and a huge number of things it might be useful for ... . And did I cover them? I'm sure I didn't. Loops in loops, partly filtered sets, millions of records, ... are not covered in this blog. Let's just say I would like to encourage you to monitor your code.
I'm also bothered by the caching. It's so unpredictable. It just does not produce consistent behaviour.
Now, as you might have noticed, I'm not really in favour of marking records. I'm not the one who will encourage you to use it. In my opinion, marking records is only useful when you can't set a filter. And if you can't set the filter, that means you're going to loop over too many records (the entire table?), marking records, and then "filter your marked set". When looping over the entire table, you're possibly locking too much, or reading too often, or whatever, so ... putting too much load on the database (imagine 20 users doing this at the same time on a 20Gb table [:|]). Let's try to avoid that! Marking records does not avoid it, but is (again, in my opinion) a way to create workarounds.
What I would like to suggest is to put a little bit more effort into "fixing your actual problem", which is: making sure you can use a filter! This might involve creating an extra field and writing code to update that field (in all circumstances ...), which can be a huge job. But huge job or not, it's going to make sure you're going to the database more efficiently. And did I already mention this is only my opinion? [;)]
Feel free to give me your comments...
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
AP Commerce, Inc. = where I work
Getting Started with Dynamics NAV 2013 Application Development = my book
Implementing Microsoft Dynamics NAV - 3rd Edition = my 2nd book
I can't. I recently (a few hours ago) got the flaming comment that I shouldn't use smileys in my blog. Hey, what can I say .. it's a habit (read: addiction) \:D/ .
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
What impact does my C/AL have on SQL - SIFT on 5.0 SP1
Again, for a better edited version (and for the screenshots) you should go to my blog ... .
Normally, I would have written an article about SumIndexFields on versions prior to 5.0 SP1, but I decided not to. The Belgian version of Microsoft Dynamics NAV 5.0 SP1 has just been released, and the Belgian release is always pretty late. In fact ... I just checked ... every localized version is available right now. That's why I decided to dig a little bit into this version's new SIFT technology on SQL Server first. Who knows, maybe I'll be in the mood to write about the SIFT technology in the previous versions when I'm in the hospital in a few days (yes, I'm waiting for an operation on my knee [:(]) ... .
What is SIFT?
SIFT is an abbreviation for "Sum Index Flow Technology". Simply described: it's a technology that stores sums in the background to be able to quickly get summarized amounts. For developers: SIFT tables contain records with summed values, grouped by the fields of the key on which you defined the SumIndexField - the amount field that should be summed (wow). For sales: SIFT is a technology that distinguishes NAV from any other ERP system. For functional consultants: SIFT means sums. For the somewhat more experienced technical consultant who cares about performance: SIFT is a technology for speeding up reading performance on summed fields, but one that should be looked at carefully, because speeding up reading performance usually means slowing down writing performance.
Main difference with old SIFT
The biggest change they made in 5.0 SP1 is that instead of SIFT tables, there are now SIFT views, which are indexed. This means there is an efficient way to get results from the view (because the results are 'stored' in a certain way), but it also means there is a cost to maintain the view. Maybe not comparable with the dedicated SIFT table in the previous versions, but still ... . This also means that the triggers that existed to maintain the SIFT tables ... are gone ... which is a good thing!
Now, quite a few posts have been written about this by the Sustained Engineering Team on their blog, and there is also a whitepaper available on PartnerSource. Definitely worth reading.
Creating SIFT view
A SIFT view is created when you configure a SumIndexField on a key ... indeed, just the same as before. With the property "MaintainSIFTIndex", you can disable or enable the SIFT view. If this property is FALSE, the SIFT view will disappear.
This example shows how NAV creates an indexed view:
(pasted from the whitepaper)
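As a rough illustration (the view and index names are simplified; the real view name follows the Table$VSIFT$KeyNo pattern), such an indexed view boils down to something like:

CREATE VIEW "CRONUS BELGIË NV$G_L Entry$VSIFT$1" WITH SCHEMABINDING AS
SELECT "G_L Account No_","Posting Date",
       COUNT_BIG(*) AS "$Cnt",
       SUM("Amount") AS "SUM$Amount"
FROM dbo."CRONUS BELGIË NV$G_L Entry"
GROUP BY "G_L Account No_","Posting Date"

CREATE UNIQUE CLUSTERED INDEX "VSIFTIDX" ON "CRONUS BELGIË NV$G_L Entry$VSIFT$1" ("G_L Account No_","Posting Date")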
In SQL Server Management Studio, a similar view looks like this:
<Look at my blog for the picture>
It's very clear, isn't it? You can read it right away. It's just grouping by G/L Account and Posting date, and summing up all requested Amounts (SumIndexFields). We even have a 'Count' field, which indicates how many records exist for that grouping. But...
Hey, where are my SIFT buckets?
Gone! As described in the white paper: "The indexed view that is used for a SIFT key is always created at the most finely-grained level." That means: No SIFTLevels window in C/SIDE anymore, no buckets anymore, ... . Simplicity!
But what if you would like to create another level, one that is not as detailed? This could be beneficial! Well, again, simple:
Just create a new key with the fields of the level you would like to create the SIFT view on
If you don't want to maintain the key as index in SQL, disable MaintainSQLIndex
Add the fields for the sums in the SumIndexField
You see? It's just creating a new SIFT view on a new key. When you do this for the SIFT above, but only on "G/L Account No"-level, you get a SIFT view like this:
<Look at my blog for the picture>
Now, when and how is it used in C/SIDE?
In FlowFields, of course. How dare you ask that question?
Everybody knows the Chart of Accounts. You know it contains some FlowFields that total the "G/L Entry" records on this form. If you monitor it, you'll see SQL Server receives statements like this:
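Roughly like this (the account number is illustrative; note the NOEXPAND hint, which tells SQL Server to use the indexed view directly):

SELECT SUM("SUM$Amount") FROM dbo."CRONUS BELGIË NV$G_L Entry$VSIFT$1" WITH (READUNCOMMITTED,NOEXPAND) WHERE (("G_L Account No_"='705000'))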
So, it's just querying the SIFT view to get the sum. Because this view already contains summed amounts, it has to sum fewer values. But keep in mind the levels: only the most detailed level is maintained. The statement above sums only on G/L Account, which is the least detailed level. This level is not maintained anymore in 5.00 SP1, which means that (by default) it's going to sum up more records than before.
What if I disable "MaintainSIFTIndex"?
If we disable the MaintainSIFTIndex of the SIFT above, we expect C/SIDE to query the G/L Entry table directly. And indeed, this is the statement it's sending:
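Again roughly (same illustrative account number):

SELECT SUM("Amount") FROM dbo."CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) WHERE (("G_L Account No_"='705000'))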
You can imagine this statement will have to sum more amounts than the statement above.
Is there another way in which the SIFT views are used?
Yep, it's still there: a statement that is often forgotten, namely CALCSUMS. It's a way to use SumIndexFields without using (or creating) a FlowField on a table. To keep the same logic as above:
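A sketch (account number illustrative; CALCSUMS needs the current key to carry the SumIndexField):

recGL.SETCURRENTKEY("G/L Account No.","Posting Date");
recGL.SETRANGE("G/L Account No.",'705000');
recGL.CALCSUMS(Amount);
MESSAGE('%1',recGL.Amount);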
You see I'm not using a flowfield, but I am using the SumIndexField of my G/L Entry table. This is the resulting statement in SQL Server:
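It should look familiar - essentially the same SUM against the SIFT view as with the FlowField:

SELECT SUM("SUM$Amount") FROM dbo."CRONUS BELGIË NV$G_L Entry$VSIFT$1" WITH (READUNCOMMITTED,NOEXPAND) WHERE (("G_L Account No_"='705000'))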
So, what about my ZeroSiftRecords?
Probably everyone has heard of the ZeroSiftRecords phenomenon. In the old days (versions prior to 5.0 SP1 ... so really not that old [:)]), it could happen that SIFT tables contained records with summed values all equal to 0. The reason behind this was that every record in the main table was deleted for that particular summed group. The sum of non-existing records is 0.
Now, we're working with views. These views are maintained by SQL Server automagically. It's actually some kind of projection of the main table in some way (a summed way). SQL Server is clever enough to exclude the zero-records. So, no ZeroSiftRecords anymore from now on. This is one maintenance job less [:)].
So what's that $Cnt-field for?
As you can see, there is also a $Cnt field present in the SIFT view. It counts the number of "summing" records. But what is it used for? I tested a "Count" FlowField, without result:
A normal "COUNT" in C/AL code:
An "Avarage" flowfield:
No use of the $Cnt field whatsoever. Maybe it's used internally, I don't know. I couldn't think of any functionality or way to see where it's used (maybe you have a clue?).
SQL Server 2000
My opinion about SQL 2000 users is: "hey guys, we're in the year 2008" ... . But that's only my opinion.
Seriously, in this article it's stated very clearly that maintaining an indexed view on SQL Server 2000 can be a time-consuming process (because of the clustered index scans etc. ...). Therefore, their recommendation is, when upgrading to 5.0 SP1, you should always consider upgrading to SQL Server 2005 as well.
Do I need SQL Server 2005 Enterprise edition?
If you read the comparison between the SQL Server 2005 editions, you could conclude that you need the expensive Enterprise edition to run Microsoft Dynamics NAV 5.0 SP1. This is a wrong conclusion!
Because the "Indexed View Matching" functionality is not used by Dynamics NAV, this is no issue. This was beautifully explained in this blog post by (once again) the Sustained Engineering Team, as a response to my question a week earlier [:D].
Ok, that's it, guys. I hope you enjoyed it. Please feel free to drop a comment with .. any .. uhm .. Comment [:)].
PS: after this article was finished, I noticed another very explanatory blog post from the UK team. You can find it here.
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
Very informational, but I prefer Dr House on a Sunday evening...
Thx, was very helpful.
Until you realize what you miss ...
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
recCust.SETRANGE(...);
recCust.DELETEALL(TRUE);
Will these 2 statements produce SQL SELECT and DELETE statements?
recCust.SETRANGE(...);
recCust.DELETEALL(FALSE);
Will these 2 statements produce a SQL DELETE statement alone?
[This is how it's given in your code sample. Please clarify]