A large number of fields in a file can affect performance

Date: Archived
Product/Release: LANSA for the AS/400
Abstract: The number of fields in a file can affect application performance, but only if very large numbers are involved
Submitted By: LANSA Technical Support

Using files with very large numbers of fields (more than, say, 200) can affect application performance where I/O modules are involved. The large number of fields causes very large I/O modules to be created. This can in turn limit the number of logical views that can be defined on the file and accessed via the I/O module. In extreme circumstances (say, more than 500 fields and more than 20 logical views) the I/O module will become too large to compile, and the user will be forced to use the file with no I/O module and to specify FUNCTION OPTIONS(*DBOPTIMISE) in all functions that access the file.
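
As an illustration only, a function forced down this path would specify *DBOPTIMISE on its FUNCTION command and then access the file directly with ordinary RDML I/O commands. The file name BIGFILE and the field names below are hypothetical; a minimal sketch might look like this:

    FUNCTION OPTIONS(*DBOPTIMISE)
    REQUEST FIELDS(#CUSTNO)
    * Only the fields actually referenced here are handled by the
    * generated code, not every field in BIGFILE
    FETCH FIELDS(#CUSTNAME #BALANCE) FROM_FILE(BIGFILE) WITH_KEY(#CUSTNO)
    DISPLAY FIELDS(#CUSTNO #CUSTNAME #BALANCE)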

Please note that the reference here is to the number of fields in the file ... not to the net file record length. However, because LANSA is currently RPG/400 based, the net record length cannot exceed 9999 bytes (the RPG-imposed limit), which is substantially less than the OS/400 limit of approximately 32K.

The performance impact in *DBOPTIMISE routines that access files with a very large number of fields can be expected to be minimal, as the generated code caters only for the fields actually used in the program ... unlike an I/O module, which must cater for all the fields in the file. The main impact would be an increase in the time taken to generate and compile the *DBOPTIMISE program.

Additionally, all LANSA I/O commands allow only 100 fields to be processed at one time (any 100 fields, not just the first 100 fields in the file). Normally this causes only slight complications on the INSERT command; all other I/O commands are largely unaffected unless the program needs to retrieve or update more than 100 fields in one operation.
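
By way of illustration, one way around the limit on the INSERT command is to write the record in two passes: an INSERT of one group of (up to 100) fields followed by an UPDATE of the remainder. The file name BIGFILE, the key field #ORDNUM and the group and field names below are hypothetical, and each GROUP_BY is abbreviated to a few fields; a sketch, assuming the file allows the second group of fields to be defaulted on the initial insert, might be:

    GROUP_BY NAME(#XG_PART1) FIELDS(#ORDNUM #FLD001 #FLD002)
    GROUP_BY NAME(#XG_PART2) FIELDS(#FLD101 #FLD102)
    * Write the record with the first group of fields ...
    INSERT FIELDS(#XG_PART1) TO_FILE(BIGFILE)
    * ... then re-read it by key and update the remaining fields
    FETCH FIELDS(#ORDNUM) FROM_FILE(BIGFILE) WITH_KEY(#ORDNUM)
    UPDATE FIELDS(#XG_PART2) IN_FILE(BIGFILE)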

The use of OTHER files with very large numbers of fields and very large numbers of logical views cannot be avoided in some situations, and in extreme cases reversion to total *DBOPTIMISE usage may be required.

However, it is strongly recommended that new database files should not contain more than 100 - 200 fields. If necessary, segment the file record across multiple files. This approach can also be advantageous if some parts of the file record are optional, saving on overall disk space usage.
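
As a sketch of this segmentation, the hypothetical order header below is split into a core file and an optional extension file that share the same key. The file names ORDHDR1 and ORDHDR2 and all field names are illustrative only:

    * Core details are held in the first (mandatory) file ...
    FETCH FIELDS(#CUSTNO #ORDDATE) FROM_FILE(ORDHDR1) WITH_KEY(#ORDNUM)
    * ... optional details are held in a second file with the same key
    FETCH FIELDS(#SHIPINST #ORDNOTES) FROM_FILE(ORDHDR2) WITH_KEY(#ORDNUM)
    IF_STATUS IS_NOT(*OKAY)
    MESSAGE MSGTXT('No extension record exists for this order')
    ENDIF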

Additionally, investigate defining array structures as one long field rather than as individual entries. This technique can substantially reduce the number of fields in a file, while only slightly complicating programming.
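
As a hedged illustration, suppose twelve monthly sales values were held as a single 84-byte field (#MTHVALS) rather than as twelve separate 7-byte fields. The file SALESHST, the key #CUSTNO and all field names are hypothetical, and the example assumes RDML's SUBSTRING expression can be used on the CHANGE command to extract an individual entry:

    * One fetch returns the whole 'array' as a single field
    FETCH FIELDS(#MTHVALS) FROM_FILE(SALESHST) WITH_KEY(#CUSTNO)
    * Extract the first (January) entry from positions 1 - 7
    * (SUBSTRING usage is assumed here, not taken from this article)
    CHANGE FIELD(#MTHVAL) TO('SUBSTRING(#MTHVALS, 1, 7)')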