3.3.2 The EntireTable Cache
In addition to using the three caching methods described so far—Found, FoundAndEmpty, and NotInTTS—you can set a fourth caching option, EntireTable, on a table. EntireTable
enables a set-based cache. It causes the AOS to mirror the table in the
database by selecting all records in the table and inserting them into
a temporary table when any record from the table is selected for the
first time. The first process to read from the table could therefore
experience a longer response time because the application runtime reads
all records from the database. Subsequent select queries then read from the entire-table cache instead of from the database.
A
temporary table is usually local to the process that uses it, but the
entire-table cache is shared among all processes that access the same
AOS. Each company (as defined by the DataAreaId
field) has an entire-table cache, so two processes requesting records
from the same table but from different companies use different caches,
and both could experience a longer response time to instantiate the
entire-table cache.
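The following X++ job is a minimal sketch of this per-company behavior. The job name and the company accounts 'cm1' and 'cm2' are hypothetical, and CustGroup is assumed here, for illustration only, to be configured with EntireTable caching.
static void PerCompanyEntireTableCache(Args _args)
{
    CustGroup custGroup;   // Assumed to be configured with CacheLookup = EntireTable.
    ;
    changecompany('cm1')   // Hypothetical company account.
    {
        custGroup = null;  // Reset the buffer after switching company.
        // The first select in company 'cm1' builds (or reuses) that
        // company's entire-table cache on the AOS.
        select firstonly custGroup;
    }

    changecompany('cm2')   // Hypothetical company account.
    {
        custGroup = null;
        // The same select in company 'cm2' uses a separate entire-table
        // cache, so it can also incur the initial load from the database.
        select firstonly custGroup;
    }
}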
The entire-table cache is a server-side cache only. When requesting
records from the client tier on a table that is entire-table cached,
the table behaves as a Found cached
table. If a request for a record is made on the client tier that
qualifies for searching the record cache, the client first searches the
local Found cache. If the record isn’t found, the client calls the AOS
to search the entire-table cache. When the application runtime returns
the record to the client tier, it inserts the record into the
client-side Found cache.
The entire-table cache isn’t used when a select statement joins an entire-table-cached table to a table that isn’t entire-table cached. In this situation, the entire select statement is passed to the database. However, when a select statement accesses only the single entire-table-cached table, or joins it only to other entire-table-cached tables, the entire-table cache is used.
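As a sketch of the difference, assume for illustration that CustGroup is entire-table cached while CustTable is not; the job name and the cache configuration are assumptions. The first select below can be served from the cache, whereas the joined select is passed to the database in its entirety.
static void EntireTableCacheAndJoins(Args _args)
{
    CustGroup custGroup;   // Assumed entire-table cached for this illustration.
    CustTable custTable;   // Assumed not entire-table cached.
    ;
    // Single-table select: can be answered from the entire-table cache.
    select firstonly custGroup
        where custGroup.CustGroup == '20';

    // Join to a table that isn't entire-table cached: the whole statement
    // is passed to the database, bypassing the entire-table cache.
    select firstonly custTable
        join custGroup
            where custGroup.CustGroup == custTable.CustGroup;
}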
The Dynamics
AX application runtime flushes the entire-table cache when records are
inserted, updated, or deleted in the table. The next process that
selects records from the table suffers a performance degradation
because it must reread the entire table into the cache. In addition to
flushing its own cache, the AOS that executes the insert, update, or
delete also informs other AOSs in the same installation that they must
flush their caches on the same table. This prevents old and invalid
data from being cached for too long in the entire Dynamics AX
application environment. In addition to this flushing mechanism, the
AOS flushes all the entire-table caches every 24 hours.
Because
of the flushing that results when modifying records in a table that has
been entire-table cached, you should avoid setting up entire-table
caches on frequently updated tables. Rereading all records into the
cache results in a performance loss, which could outweigh the
performance gain achieved by caching records on the server tier and
avoiding round-trips to the database tier. You can override the
entire-table cache setting on a specific table at run time when you
configure the Dynamics AX application.
Even
if the records in a table are fairly static, you might achieve better
performance by not using the entire-table cache if the table has a
large number of records. Because the entire-table cache uses temporary
tables, it changes from an in-memory structure to a file-based
structure when the table uses more than 128 kilobytes (KB) of memory.
This results in performance degradation during record searches. The
database search engines have also evolved over time and are faster than
the ones implemented in the Dynamics AX application runtime. It might
be faster to let the database search for the records than to set up and
use an entire-table cache, even though a database search involves
round-trips to the database tier.
3.3.3 The RecordViewCache Class
The RecordViewCache class allows you to establish a set-based cache from X++ code. You initiate the cache by writing the following X++ code.
select nofetch custTrans
    where custTrans.AccountNum == '1101';

recordViewCache = new RecordViewCache(custTrans);
The records to cache are described in the select statement, which must include the nofetch keyword to prevent the selection of the records from the database. The records are selected when the RecordViewCache object is instantiated with the record buffer passed as a parameter. Until the RecordViewCache object is destroyed, select statements will execute on the cache if they match the where clause defined when it was instantiated. The following X++ code shows how the cache is instantiated and used.
static void RecordViewCache(Args _args)
{
    CustTrans custTrans;
    RecordViewCache recordViewCache;
    ;
    select nofetch custTrans                          // Define records to cache.
        where custTrans.AccountNum == '1101';

    recordViewCache = new RecordViewCache(custTrans); // Cache the records.

    select firstonly custTrans                        // Use cache.
        where custTrans.AccountNum == '1101'
           && custTrans.CurrencyCode == 'USD';
}
The cache can be instantiated only on the server tier. The defined select statement can contain only equal-to (==) predicates in the where
clause and is accessible only by the process instantiating the cache
object. If the table buffer used for instantiating the cache object is
a temporary table or it uses EntireTable caching, the RecordViewCache object isn’t instantiated.
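The following method is a minimal sketch of these constraints; the class name CustTransCacheDemo and the method are hypothetical. The server modifier keeps the code on the server tier, and the where clause uses only equal-to predicates.
// Method on a hypothetical CustTransCacheDemo class.
server static void sumTransactions(AccountNum _accountNum)
{
    CustTrans       custTrans;
    RecordViewCache recordViewCache;
    AmountCur       total;
    ;
    // Only equal-to (==) predicates are allowed in the where clause.
    select nofetch custTrans
        where custTrans.AccountNum == _accountNum;

    recordViewCache = new RecordViewCache(custTrans);

    // Served from the cache because the where clause matches the one
    // used to instantiate the cache.
    while select custTrans
        where custTrans.AccountNum == _accountNum
    {
        total += custTrans.AmountCur;
    }

    info(strfmt("Total amount: %1", total));
}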
The
records are stored in the cache as a linked list of records. Searching
therefore involves a sequential search of the cache for the records
that match the search criteria. When defining select
statements to use the cache, you can specify a sort order. When a sort
order is specified, the Dynamics AX application runtime creates a
temporary index on the cache, which contains the requested records
sorted as specified in the select
statement. The application runtime iterates the temporary index when it
returns the individual rows. If no sorting is specified, the
application runtime merely iterates the linked list.
If the table cached in the RecordViewCache is also record-cached, the application runtime can use both caches. If a select statement is executed on a Found cached table and the select statement qualifies for lookup in the Found cache, the application runtime performs a lookup in this cache first. If nothing is found and the select statement also qualifies for lookup in the RecordViewCache, the runtime uses the RecordViewCache and updates the Found cache after retrieving the record.
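This interplay can be sketched as follows, assuming for illustration that CustTable is configured as a Found cached table; the job name is hypothetical. The firstonly select on the primary key qualifies for both caches.
static void CombinedCaches(Args _args)
{
    CustTable       custTable;
    RecordViewCache recordViewCache;
    ;
    select nofetch custTable
        where custTable.AccountNum == '1101';

    recordViewCache = new RecordViewCache(custTable);

    // A firstonly select on the primary key qualifies for the Found cache,
    // which is searched first; on a miss, the RecordViewCache is searched
    // and the Found cache is updated with the retrieved record.
    select firstonly custTable
        where custTable.AccountNum == '1101';
}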
Inserts,
updates, and deletes of records that meet the cache criteria are
reflected in the cache at the same time that the data manipulation
language (DML) statements are sent to the database. Records in the
cache are always inserted at the end of the linked list. A hazard
associated with this behavior is that an infinite loop can occur when
application logic is iterating the records in the cache and at the same
time inserting new records that meet the cache criteria. An infinite loop is shown in the following X++ code example, in which a RecordViewCache object is created containing all CustTable records associated with CustGroup '20'. The while select statement iterates each record in the cache, but because each cached record is duplicated with CustGroup '20' still set, the new records also meet the cache criteria and are appended to the end of the cache. Eventually, the loop fetches these newly inserted records as well.
static void InfiniteLoop(Args _args)
{
    CustTable custTable;
    RecordViewCache recordViewCache;
    CustTable custTableInsert;
    ;
    select nofetch custTable                          // Define records to cache.
        where custTable.CustGroup == '20';

    recordViewCache = new RecordViewCache(custTable); // Instantiate cache.

    ttsbegin;

    while select custTable                            // Loop over cache.
        where custTable.CustGroup == '20'
    {
        custTableInsert.data(custTable);
        custTableInsert.AccountNum = 'dup' + custTable.AccountNum;
        custTableInsert.insert();                     // Will insert at end of cache.
                                                      // Records will eventually be selected.
    }

    ttscommit;
}
To avoid the infinite loop, simply sort the records when selecting them from the cache. Sorting creates a temporary index that contains only the records that were in the cache when the select statement first retrieved them, so any records inserted afterward are not retrieved. This is shown in the following example, in which the order by clause is applied to the select statement.
static void FiniteLoop(Args _args)
{
    CustTable custTable;
    RecordViewCache recordViewCache;
    CustTable custTableInsert;
    ;
    select nofetch custTable                          // Define records to cache.
        where custTable.CustGroup == '20';

    recordViewCache = new RecordViewCache(custTable); // Instantiate cache.

    ttsbegin;

    while select custTable                            // Loop over a sorted cache.
        order by CustGroup                            // Create temporary index.
        where custTable.CustGroup == '20'
    {
        custTableInsert.data(custTable);
        custTableInsert.AccountNum = 'dup' + custTable.AccountNum;
        custTableInsert.insert();                     // Will insert at end of cache.
                                                      // Records are not inserted in index.
    }

    ttscommit;
}
Changes made to records in a RecordViewCache object can’t be rolled back. If one or more RecordViewCache objects exist when a ttsabort operation executes, or when an error is thrown that results in a rollback of the database, the RecordViewCache objects still contain the same information. Any instantiated RecordViewCache object that is subject to modification by the application logic should therefore not have a lifetime longer than the transaction scope in which it is modified. The RecordViewCache object must be declared in a method that isn’t executed until after the transaction has begun. In the event of a rollback, the method scope is then exited, and the object and the cache are both destroyed.
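A minimal sketch of that structure is shown below, using a hypothetical DemoCacheUpdate class: the transaction is started in main, and the cache is declared and used only in doWork, which doesn’t execute until after ttsbegin, so a rollback unwinds the method scope and destroys the object and the cache with it.
// Methods on a hypothetical DemoCacheUpdate class.
static void main(Args _args)
{
    ttsbegin;
    DemoCacheUpdate::doWork('20');
    ttscommit;
}

// The RecordViewCache is declared here, in a method that executes only
// after the transaction has begun.
static void doWork(CustGroupId _custGroupId)
{
    CustTable       custTable;
    CustTable       custTableInsert;
    RecordViewCache recordViewCache;
    ;
    select nofetch custTable
        where custTable.CustGroup == _custGroupId;

    recordViewCache = new RecordViewCache(custTable);

    while select custTable
        order by CustGroup                            // Avoid the infinite-loop hazard.
        where custTable.CustGroup == _custGroupId
    {
        custTableInsert.data(custTable);
        custTableInsert.AccountNum = 'copy' + custTable.AccountNum;
        custTableInsert.insert();
    }
}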
As described earlier, the RecordViewCache
object is implemented as a linked list that allows only a sequential
search for records. When you use the cache to store a large number of
records, a performance degradation in search occurs because of this
linked-list format. You should weigh the use of the cache against the extra time spent fetching the records from the database, where a more optimal search algorithm is available. Consider the time hit especially when you search only for a subset of the records: the application runtime must match each record in the cache against the more granular where clause in the select statement because no indexing is available for the records in the cache.
However, for small sets of records, or for situations in which the same records are looped multiple times, RecordViewCache offers a substantial performance advantage compared to fetching the same records multiple times from the database.
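For example, the following sketch (a hypothetical job) loops the same small set of cached records twice; the database is read only once, when the RecordViewCache is instantiated.
static void LoopCacheTwice(Args _args)
{
    CustTrans       custTrans;
    RecordViewCache recordViewCache;
    AmountCur       totalPositive;
    AmountCur       totalNegative;
    ;
    select nofetch custTrans
        where custTrans.AccountNum == '1101';

    // The records are read from the database once, when the cache
    // is instantiated.
    recordViewCache = new RecordViewCache(custTrans);

    // First pass over the cached records.
    while select custTrans
        where custTrans.AccountNum == '1101'
    {
        if (custTrans.AmountCur > 0)
        {
            totalPositive += custTrans.AmountCur;
        }
    }

    // Second pass over the same cached records; no database round-trip.
    while select custTrans
        where custTrans.AccountNum == '1101'
    {
        if (custTrans.AmountCur < 0)
        {
            totalNegative += custTrans.AmountCur;
        }
    }

    info(strfmt("Positive: %1, Negative: %2", totalPositive, totalNegative));
}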