Hi everyone,
I have a big Item Ledger Entry table to manage (at least, it's big for me, because I've never worked with high data volumes). It's 9'700'000 records, and about 75% of its data refers to one single item code.
Setting aside the always useful advice about hw, indexes, network, application... do any SQL experts have hints/advice/best practices for managing this particular kind of data distribution?
Actually, I'm (still) not having a lot of problems (except for some weird behaviour of pages that choose the wrong indexes), but I guess that some issues will arise after the go-live.
Thanks in advance for any input!
-Mirko-
"Never memorize what you can easily find in a book".....Or Mibuso
My Blog
Comments
They need the history of all the serial nos. and lots from day 1. (At least, this is what has been decided by our project manager and the customer.)
Obviously, non-traced items are not carried over.
If it's just to track 9.7 million serial numbers, then from day one you need to work out a different way to track them, because tracking this volume of serial numbers using standard Navision will not work. You need to look at a separate table to store them.
Mind you, if the client does decide to go in this direction, please keep my number handy for when you need some performance tuning.
Maybe he also needs to read my blog.
The most powerful tool that a Dynamics NAV consultant can use.
Even if you have great knowledge of NAV, I can't accept someone who doesn't know "the whole story" blaming the PM. He's a person I've known for 5 years; he knows what he's doing and he's always done his job very well with maximum effort. I'm going to ask him if we really need ALL the ILE.
Btw, I think there are NAV DBs with much more ILE than ours, aren't there? :-k
To me it looks like they are just using NAV as it comes, without looking at the consequences. What was the reason you didn't develop a custom tracking module for this? The problem is that ILE and VLE track tons of information that you don't need. If you make a custom tracking module, you only track the information that is needed.
The standard ILE was not really usable "out of the box"... it has been hell to customize. But at the end of the day, I feel like it's pretty solid now.
P.S.: we took the decision to customize the standard about one year ago... I don't remember ALL the reasons, sorry...
Well, I don't want to comment on the "sense & nonsense" of that issue in general - just dealing with the fact that you have an ILE of a decent size (yes, I've seen bigger ILEs!).
One problem NAV has is that it calculates, for example, the inventory from all ILE - actually not directly, but via VSIFT structures. Depending on the exact structure, some VSIFTs will grow together with the ILE. Hence, there might be performance problems with the ILE and some problems with the VSIFT as well.
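To illustrate the idea (a toy sketch, not NAV's actual implementation): a VSIFT structure is essentially a pre-aggregated total maintained per key value, so inventory can be read without summing every ledger entry - which also means the maintained totals grow and must be updated alongside the ILE.

```python
from collections import defaultdict

class MiniSift:
    """Toy model of a SIFT-style maintained total: the sum per
    'Item No.' is updated on every insert, so reading inventory
    is O(1) instead of scanning all ledger entries."""
    def __init__(self):
        self.entries = []                      # the 'ledger' itself
        self.qty_by_item = defaultdict(float)  # SIFT-like running totals

    def post(self, item_no, quantity):
        self.entries.append((item_no, quantity))
        self.qty_by_item[item_no] += quantity  # maintained on every write

    def inventory(self, item_no):
        return self.qty_by_item[item_no]       # no scan of entries needed

ledger = MiniSift()
ledger.post("ITEM-A", 10)
ledger.post("ITEM-A", -4)
ledger.post("ITEM-B", 7)
print(ledger.inventory("ITEM-A"))  # 6.0
```

The trade-off the post describes is visible here: reads get cheap, but every posting now pays the extra maintenance cost, and the totals structure grows with the variety of key values.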
For SQL Server, "size does not matter" - basically - this is primarily an issue of storage. Solving storage issues could be accomplished by ...
... minimizing the creation of ILE (like not migrating all that old stuff )
... clean-up and archiving (which is not available anymore with standard NAV)
... using "Data Compression" of SQL Server 2008 Enterprise (ILE is a quite good candidate!)
The real performance issues arise if SQL Server starts to scan indexes - a scan means "sequentially reading the leaf node level of an index". It's like reading a phone book page-by-page-by-page... and so on. The bigger the phone book, the longer it takes to scan it. So in such a case "size matters" a lot!
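The phone-book analogy can be made concrete with a toy sketch (nothing NAV-specific; the cost model is simplified): a scan touches every entry up to the match, while a seek on a sorted index needs roughly log2(n) comparisons.

```python
import bisect

def scan_count(entries, target):
    """Index scan: read entries sequentially; cost grows with table size."""
    reads = 0
    for e in entries:
        reads += 1
        if e == target:
            break
    return reads

def seek_count(sorted_entries, target):
    """Index seek: binary search on the sorted leaf level; cost grows
    with log2(size). bisect does ~log2(n) comparisons internally."""
    pos = bisect.bisect_left(sorted_entries, target)
    found = pos < len(sorted_entries) and sorted_entries[pos] == target
    return len(sorted_entries).bit_length() if found else None

entries = list(range(9_700_000))
print(scan_count(entries, 9_699_999))   # 9700000 reads - the whole phone book
print(seek_count(entries, 9_699_999))   # 24 - log2 of 9.7 million, rounded up
```

That 9,700,000-vs-24 gap is exactly why a missing or wrong index hurts so much more at this table size than it did at 100,000 records.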
Hence it is crucial to have sufficient indexing on the ILE (and VSIFT) and a high "discipline" in accessing those structures: you should not have too big a variety of different filter constellations, as every filter might require a different optimized index. It would not be feasible to have an index for every filter, as this would dramatically increase the table/index size and increase the "cost per record" when writing data to the table. Too many indexes will delay insert, update and delete transactions!
Thus, indexing is a matter of balance: adding the indexes you really (!) need, and removing those which are not used.
The physical order of the records in the ILE can also have an impact on performance: the default "Clustered Index" on "Entry No." has advantages and disadvantages (see this thread for details).
In some cases it can improve performance to create a new "Key" on the ILE by "Item No." and flag this one as "Clustered" - the records will then be sorted by "Item No.", "Entry No.", which supports most of the filters applied to the ILE (downside: the indexes get larger).
Of course, changing the CI is a heavy-load transaction taking a long time for 9.7 million records; this should be tested thoroughly first!
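A minimal sketch of why the clustering key matters (hypothetical mini-table, not actual NAV data): with records physically ordered by ("Item No.", "Entry No."), a filter on one item reads one contiguous slice of pages instead of rows scattered across the whole table.

```python
# (entry_no, item_no) in posting order, as with the default CI on "Entry No.":
entries = [
    (1, "B"), (2, "A"), (3, "B"), (4, "C"), (5, "A"), (6, "B"),
]

# CI on "Item No.","Entry No.": rows for one item become physically adjacent
clustered = sorted(entries, key=lambda e: (e[1], e[0]))

# a filter on item "B" now hits one contiguous slice of the table
item_b = [e for e in clustered if e[1] == "B"]
positions = [i for i, e in enumerate(clustered) if e[1] == "B"]
print(item_b)       # [(1, 'B'), (3, 'B'), (6, 'B')]
print(positions)    # [2, 3, 4] - adjacent positions, not scattered
```

With the default "Entry No." clustering, the same filter would touch positions 0, 2 and 5 - three separate spots. At 9.7 million rows that difference becomes scattered random I/O versus one sequential range read, which is the advantage (and, for inserts, the cost) the post describes.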
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
If so, are you telling me it's not an issue? (I'm really asking, I'm not being ironic.)
For the other advice, I'm surely following it as far as it concerns me (it's "the always useful advice about hw, indexes, network, application" I was talking about in the first post). Also, I'm bothering my colleagues about what concerns them (hw, network and so on...).
PS: I talked about this with the PM and he agreed that we should cut out some old entries. It won't be easy work, because we have to carefully select what to delete to keep the serial no. history coherent (e.g. the first entry must be THE FIRST load, then we delete some entries, and then we restart the ledger entries with an unload - to keep it simple).
As always, thanks everyone for the valuable advice.
I would argue against that, since the comment
means that Item No. is a very NON-selective index. This would reduce the record set from 9.7 million to 7.3 million, which would not be anywhere near enough to negate the overhead of maintaining the index itself.
As I said at the beginning (and Belias seems to refuse to listen ), the big issue here is not the number of records; it's that Item No. will not be highly selective in this database, and that is where the system falls apart.
The only selective key will be the serial number.
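The selectivity argument in numbers, using the figures from this thread (the formula is the standard ratio; the 75% distribution is from the first post): an index pays off when one filter value narrows the result to a small fraction of the table.

```python
def selectivity(rows_matched, total_rows):
    """Fraction of the table a single filter value returns.
    Close to 0 = highly selective (an index seek pays off);
    close to 1 = the index barely narrows anything down."""
    return rows_matched / total_rows

total = 9_700_000

# 75% of all ILE belong to one single Item No.:
print(selectivity(7_275_000, total))  # 0.75 -> near useless for that item

# a serial number identifies only a handful of entries:
print(selectivity(3, total))          # ~3e-07 -> highly selective
```

This is the whole point being made here: for the dominant item, filtering on Item No. still leaves 7.3 million rows to read, so SQL Server may as well scan; a Serial No. filter drops straight to a few rows.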
I agree that a CI on "Item No." would have little or no benefit for queries running on that 75% item, but I'm a little concerned about the 25% other items ...
Usually it's "Item No." that is part of most filters, so a physical sorting on it could have advantages - but there can be disadvantages, too, which is why I want to highlight that such a change should be tested thoroughly. "Serial No." sounds pretty smart to me, indeed!
And also, I was not worrying about the number of ILE. So, trust me... I've listened to you, as always! Serial no. is probably a good candidate for a clustered index; we do a lot of queries on it. I just have to check how many entries without a serial no. are being generated during the day-to-day work, because if there are a lot, my CI on serial no. could become less useful.
Anyway, misunderstandings apart: I listened to your advice (the first quote in this post), but even if you're correct - and I think you are - the logic behind the tracking we are working on is very complex, and changing the whole module we have developed on top of standard NAV would be a pain in the a** now. There's simply no time to rethink the whole module. This is not "solving the problem", but honestly I'm trying to focus on how to improve what we already did (and what already works). We're not exploding the ILE more than standard NAV already does, so I'm confident we'll be able to make it work as fast as possible: this is what I'm trying to achieve with your (Mibuso's) and my colleagues' help.
I can even imagine it getting worse: you may get the idea to start adding more fields to the Item Ledger Entry to track more serial number information.
In terms of performance with what you have got, Jorg has given all the advice; it's good advice, so follow it in detail. Monitor all the indexes very regularly. Maybe you will get lucky.
Oh and lots of disks and lots of RAM.
And good luck.
These two quotes are connected; they do NOT contradict one another. You are creating too much redundant information by storing serial numbers in the Item Ledger Entry table. Since the Item No. is common and only the serial numbers are different, this causes very low selectivity on Item No., which traditionally is a highly selective key in Navision.
If you purchase 100 items on one order, and they all have the same cost, then you need just one Item Ledger entry, and 100 serial number entries.
If instead you purchase those items one at a time, and each time you purchase that item it has a different cost, then you have no option but to end up with 100 separate item ledger entries.
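The arithmetic behind the two designs can be sketched like this (a hypothetical model of the trade-off being argued, not actual NAV posting logic):

```python
def records_created(lines, serials_per_line, separate_tracking_table):
    """Rows written for a receipt of serial-tracked goods.
    separate_tracking_table=True  -> one ILE per line + one tracking row per serial
    separate_tracking_table=False -> one ILE per serial (serials kept on the ILE)
    Returns (ile_rows, tracking_rows)."""
    if separate_tracking_table:
        return lines, lines * serials_per_line
    return lines * serials_per_line, 0

# 100 identical serial-tracked units on one purchase line:
print(records_created(1, 100, True))   # (1, 100): 1 ILE + 100 tracking rows
print(records_created(1, 100, False))  # (100, 0): 100 ILEs

# Mirko's typical order: 3 lines, quantity 1 each:
print(records_created(3, 1, True))     # (3, 3)
print(records_created(3, 1, False))    # (3, 0)
```

Note the last two lines: with quantity-1 order lines, which Mirko says dominate this customer's business, the separate table produces the same number of ILEs plus extra tracking rows - which is exactly his later objection to the custom-table approach.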
But anyway, it looks like we are flogging a dead horse here, so let's just drop it. Do what you can with tuning and let's hope it all goes well. I assume you have already done performance testing with the existing 9.7 million entries and that it works fine now, so the key is just to do as much as you can to keep the level of performance you have now for the next few years.
Out of curiosity, how long does it take now to run Adjust Cost just on that one item?
Except for purchase orders and internal transfers (those are just 10-20% of the total entries), the other entries are generated by orders composed of a few lines with quantity 1, or sales return orders with a few lines of quantity 1 (this is due to the peculiar business of our customer).
Now, a separate table to store the tracking is useless (or rather, counterproductive), because the result generated by a "day-to-day" order of our customer would be, for example, 3 ledger entries plus 3 item tracking records in the new custom table.
With the NAV standard solution (which we adopted), we just have 3 ledger entries with all the info we need.
PS: yes, this is something I should have said at the beginning, but I didn't remember it :oops:
In case your internal transfers are all (or mostly) done in one go, you can add a toggle to the transfer header to say it can skip the "In-Transit Location Code" and use the "Destination Location Code" directly.
In the shipment posting, you can also call the receipt posting, and in the receipt posting you just skip the ILE postings.
That saves a lot of ILEs!
We did this for a customer, because that customer has a lot of transfers where shipping and receiving are posted together, and it works fine.
That customer now has 23M ILEs (after 2 years) and 27M Value Entries, and we don't have performance problems.
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
In this case, while posting the shipment, you automatically want to post the receipt.
How to do that?
In the shipment codeunit, just after finishing the shipment posting (in C5704), just before the COMMIT, you must run the receipt posting (C5705).
But running the receipt posting will create extra ILE postings (from the in-transit location to the to-location) that you don't need anymore. So in C5705, you need to skip the ILE posting for this kind of transfer order.
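A back-of-the-envelope sketch of what this saves (assuming the usual pattern where the shipment leg posts two ILEs per line - out of the from-location, into in-transit - and the receipt leg posts two more; exact counts depend on your implementation):

```python
def transfer_ile_count(lines, direct_transfer):
    """ILEs per posted transfer order under the assumed pattern.
    Standard: shipment posts (-from, +in-transit), receipt posts
              (-in-transit, +to)  -> 4 entries per line.
    Direct:   skip the in-transit pair -> 2 entries per line."""
    per_line = 2 if direct_transfer else 4
    return lines * per_line

print(transfer_ile_count(3, False))  # 12 ILEs the standard way
print(transfer_ile_count(3, True))   # 6 ILEs - half the ledger growth
```

So for a customer whose transfers are mostly shipped and received in one go, this roughly halves the ILE growth coming from transfers.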
Any answer on that?
Jokes apart, I'm investigating it; I'll let you know as soon as possible. In my opinion it will take a ton of time, or it may even error out due to the 2GB limitation (that can happen with big ILEs, if I remember correctly). :-k
I am quite curious about this. Whilst it's not uncommon to see this total volume of ILE (10-30M), I can't remember having seen this many ILEs (7.5M) for one item.
Is that 23M for one item? Or is that the total ILE? Again, I am really curious how long it takes to adjust that one item.
How big is the total DB?