Use of the tunable parameters in the UVCONFIG file
How do I use the tunable parameters in the UVCONFIG file?
The most commonly used parameters in the UVCONFIG file are as follows:
MFILES
This parameter defines the size of the server engine (DSEngine) rotating file pool. This is a per-process pool for files, such as sequential files, that are opened by the DataStage server runtime. It does not include files opened directly at OS level by the parallel engine (PXEngine running osh).
The server engine will logically open and close files at the DataStage application level and physically close them at the OS level when the need arises.
Increase this value if DataStage jobs use a lot of files. Generally, a value of around 250 is suitable. If the value is set too low, then performance issues may occur, as the server engine will make more calls to open and close at the physical OS level in order to map the logical pool to the physical pool.
NOTE: The OS per-process open-files limit (nofiles) needs to be set higher than MFILES; ideally, nofiles should be at least 512.
This allows the DataStage process to open up to 512 - (MFILES + 8) files.
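As a sketch of that arithmetic (the nofiles and MFILES values here are illustrative, not prescriptive):

```shell
# Illustrative arithmetic only: with nofiles=512 and the pre-defined
# MFILES=150, a DataStage process can open this many files beyond the
# rotating file pool and the 8 reserved descriptors.
nofiles=512
MFILES=150
echo $(( nofiles - (MFILES + 8) ))
```

With these values the result is 354, which is why raising MFILES without also raising nofiles can starve jobs of file handles.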
On most UNIX systems, the proc file system can be used to monitor the file handles opened by a given process; for example:
ps -ef|grep dsrpcd
root 23978 1 0 Jul08 ? 00:00:00 /opt/ds753/Ascential/DataStage/DSEngine/bin/dsrpcd
ls -l /proc/23978/fd
lrwx------ 1 root dstage 64 Sep 25 08:24 0 -> /dev/pts/1 (deleted)
l-wx------ 1 root dstage 64 Sep 25 08:24 1 -> /dev/null
l-wx------ 1 root dstage 64 Sep 25 08:24 2 -> /dev/null
lrwx------ 1 root dstage 64 Sep 25 08:24 3 -> socket:
So the dsrpcd process (23978) has four files open.
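The same check works for any process. As a self-contained illustration (using the current shell's own pid as a stand-in for a dsrpcd pid found with ps or pgrep):

```shell
# Count the file descriptors a process currently holds via /proc (Linux).
# $$ (this shell) stands in here for a real dsrpcd process id.
pid=$$
ls /proc/$pid/fd | wc -l
```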
T30FILE
This parameter determines the maximum number of dynamic hash files that can be opened system-wide on the DataStage system.
If this value is too low, expect to find an error message similar to ‘T30FILE table full’.
The following engine command, executed from $DSHOME, shows the number of dynamic files in use:
echo "`bin/smat -d|wc -l` - 3"|bc
Use this command to assist with tuning the T30FILE parameter.
Every running DataStage job requires at least 3 slots in this table (for RT_CONFIG, RT_LOG, and RT_STATUS). Note, however, that multi-instance jobs share slots for these files: although each job run instance creates a separate file handle, this just increments a usage counter in the table if the file is already open to another instance.
Note that on AIX the T30FILE value should not be set higher than the system setting ulimit -n.
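A rough T30FILE sizing sketch based on the slot rules above (the workload figures are hypothetical, for illustration only):

```shell
# Hypothetical workload: each running job needs at least 3 slots
# (RT_CONFIG, RT_LOG, RT_STATUS), plus slots for dynamic hashed files
# opened by job stages, plus headroom for clients and other tools.
concurrent_jobs=50
hashed_files=100
headroom=20
echo $(( concurrent_jobs * 3 + hashed_files + headroom ))
```

Compare the result (here, 270) against the smat-based count of files actually in use before settling on a value.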
GLTABSZ
This parameter defines the size of a row in the group lock table.
Tune this value if the number of group locks in a given slot is getting close to the defined value.
Use the LIST.READU EVERY command from the server engine shell to assist with monitoring this value. LIST.READU lists the active file and record locks; the EVERY keyword lists the active group locks in addition.
For example, with a Designer client and a Director client both logged in to a project named “dstage0”, the report contains the following columns:
- Device: A number that identifies the logical partition of the disk where the file system is located.
- Inode: A number that identifies the file that is being accessed.
- Netnode: A number that identifies the host from which the lock originated. 0 indicates a lock on the local machine, which will usually be the case for DataStage. If other than 0, then on UNIX it is the last part of the TCP/IP host number specified in the /etc/hosts file; on Windows it is either the last part of the TCP/IP host number or the LAN Manager node name, depending on the network transport used by the connection.
- Userno: The phantom process that set the lock.
- Pid: A number that identifies the controlling process.
- Item-ID: The record ID of the locked record.
- Lmode: The number assigned to the lock, and a code that describes its use.
When the report describes file locks (not shown here), it contains the following additional information:
- Lmode codes describe shared or exclusive file locks, rarely seen in normal DataStage use:
  - FS, IX, CR: Shared file locks.
  - FX, XU, XR: Exclusive file locks.
When the report describes group locks, it contains the following additional information:
- Lmode codes are:
  - EX: Exclusive lock.
  - SH: Shared lock.
  - RD: Read lock.
  - WR: Write lock.
  - IN: System information lock.
- G-Address: Logical disk address of the group, or its offset in bytes from the start of the file, in hex.
- Record Locks: The number of locked records in the group.
- Group RD: The number of readers in the group.
- Group SH: The number of shared group locks.
- Group EX: The number of exclusive group locks.
When the report describes record locks, it contains the following additional information:
- Lmode codes are:
  - RL: Shared record lock.
  - RU: Update record lock.
RLTABSZ
This parameter defines the size of a row in the record lock table.
From a DataStage job point of view, this value affects the number of concurrent DataStage jobs that can be executed, and the number of DataStage Clients that can connect.
Use the LIST.READU command from the DSEngine shell to monitor the number of record locks in a given slot (see previous section for use of the EVERY keyword).
For example, consider one Director client logged in to a project named “dstage0”, with 2 instances of a job in that project running.
In the resulting report, an Item-ID of RT_CONFIG456 identifies that the running job is an instance of job number 456, whose compiled job file is locked while the instance is running so that, for example, it cannot be re-compiled during that time. A job’s number within its project can be seen in the Director job status view (Detail dialog) for a particular job.
The unnamed column between Userno and Lmode is the row number within the record lock table. Each row can hold RLTABSZ locks. In this example, 3 slots out of 75 (the default value for RLTABSZ) have been used for row 62. When the number of entries for a given row gets close to the RLTABSZ value, it is time to consider re-tuning the system.
Jobs can fail to start, or generate -14 errors, if the RLTABSZ limit is being reached.
DataStage Clients may see an error message similar to ‘DataStage Project locked by Administrator’ when attempting to connect. Note that the error message can be misleading – it means in this case that a lock cannot be acquired because the lock table is full, and not because another user already has the lock.
MAXRLOCK
This should always be set to the value of RLTABSZ - 1.
Each DSD.RUN process takes a record lock on the key <project>&!DS.ADMIN!& in the UV.ACCOUNT file in $DSHOME (as seen in the examples above). Each DataStage client connection (for example, Designer, Director, Administrator, or the dsjob command) takes this record lock as well. This is the mechanism by which DataStage determines whether operations such as project deletion are safe: such operations cannot proceed while the lock is held by any process.
MAXRLOCK needs to be set to accommodate the maximum number of jobs and sequences, plus client connections, that will be used at any given time; RLTABSZ then needs to be set to MAXRLOCK + 1.
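That sizing rule can be sketched as follows (the peak workload figures are hypothetical):

```shell
# Every running job/sequence and every client connection holds the
# <project>&!DS.ADMIN!& record lock, so MAXRLOCK must cover the peak,
# and RLTABSZ must be MAXRLOCK + 1.
peak_jobs=100
peak_clients=20
MAXRLOCK=$(( peak_jobs + peak_clients ))
RLTABSZ=$(( MAXRLOCK + 1 ))
echo "MAXRLOCK=$MAXRLOCK RLTABSZ=$RLTABSZ"
```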
Keep in mind that changing RLTABSZ greatly increases the amount of memory needed by the disk shared memory segment.
Current Recommended Settings
Customer Support has reported in the past that settings of 130/130/129 (for RLTABSZ/GLTABSZ/MAXRLOCK, respectively) work successfully on most customer installations. There have been reports of high-end customers using settings of 300/300/299, so this is environment-specific.
If sequencers or multi-instance jobs are used, start with the recommended settings of 130/130/129, and increase to 300/300/299 if necessary.
Prior to DataStage v8.5 the following settings were pre-defined:
MFILES = 150
T30FILE = 200
GLTABSZ = 75
RLTABSZ = 75
MAXRLOCK = 74 (that is, 75-1)
DataStage v8.5 has the following settings pre-defined:
MFILES = 150
T30FILE = 512
GLTABSZ = 75
RLTABSZ = 150
MAXRLOCK = 149 (that is, 150-1)
However, note that these are the lowest suggested values to accommodate all system configurations, so tuning of these values is often necessary.
DMEMOFF, PMEMOFF, CMEMOFF, NMEMOFF
These are the shared memory address offset values for each of the four DataStage shared memory segments (Disk, Printer, Catalog, NLS). Depending upon the platform, PMEMOFF, CMEMOFF, and NMEMOFF may need to be increased to allow a large disk shared memory segment to be used.
Where these values are set to 0x0 (on AIX, for example), the OS takes care of managing these offsets. Otherwise, PMEMOFF - DMEMOFF determines the largest possible disk shared memory segment size.
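For example (the offset values here are hypothetical), shell arithmetic shows the segment size implied by a given pair of offsets:

```shell
# With these (hypothetical) offsets, the disk shared memory segment can be
# at most PMEMOFF - DMEMOFF bytes.
DMEMOFF=0x90000000
PMEMOFF=0xA0000000
echo $(( PMEMOFF - DMEMOFF ))
```

Here the gap is 0x10000000 bytes (256 MB); to allow a larger disk segment, the later offsets would need to be moved up.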
Additionally, on Solaris for example, these values may need to be increased to allow a greater heap size for running DataStage jobs.
Note that when running the shmtest utility, great care must be taken when interpreting its output. The utility tests the availability of memory that it can allocate at the time it runs, and this will be affected by the current uvconfig settings, by how much shared memory is already in use, and by other activity on the machine at the time.