AS/400 Interview Questions and Answers

What is AS/400?

AS/400 (Application System/400) is a midrange computer system developed by IBM in 1988, designed for businesses requiring high reliability, security, and scalability. It runs on IBM's proprietary OS/400 operating system (later evolved into IBM i) and is known for its integrated database (DB2 for i), object-based architecture, and seamless upward compatibility.


How AS/400 Differs from Other IBM Systems :
Feature | AS/400 (IBM i) | IBM Mainframes (zSeries) | IBM Power Systems (AIX/Linux)
OS | OS/400 (IBM i) | z/OS, z/VSE, z/VM | AIX (Unix), Linux
Architecture | Object-based, integrated DB | Batch-oriented, highly scalable | Unix-based, open-source friendly
Target Market | Midrange businesses | Large enterprises, financial institutions | Enterprise computing, AI, HPC
Database | DB2 for i (integrated) | DB2, IMS, others | DB2, Oracle, MySQL
Security | Built-in security, role-based | Extensive security layers | Unix/Linux-based security models
User Interface | Green screen (5250), web-enabled | Batch processing, CICS, TSO | GUI, SSH, terminal
Virtualization | Logical partitioning (LPAR) | PR/SM for LPARs | PowerVM for AIX/Linux

Key Differences :
  1. Simplicity & Integration: AS/400 (IBM i) is known for its all-in-one system, including OS, database, security, and middleware, unlike mainframes and AIX/Linux systems, which often require separate configurations.
  2. Object-Based Architecture: Unlike traditional Unix-based systems, IBM i uses an object-based file system, which enhances security and system integrity.
  3. Backward Compatibility: IBM i systems maintain seamless compatibility with older applications, while mainframes and AIX systems may require more migration efforts.
  4. Use Cases: AS/400 (IBM i) is widely used in industries like manufacturing, banking, and retail, whereas mainframes dominate high-volume transaction processing and AIX/Linux systems are used for cloud, AI, and enterprise applications.

What are the key components of AS/400 (IBM i)?

The IBM AS/400 (now IBM i) is a highly integrated system designed for business computing. Its key components include:

1. Hardware Components :
  • Processor (CPU): Uses IBM Power architecture (earlier used CISC, later migrated to RISC-based Power processors).
  • Memory (RAM): Supports virtual memory management and efficient workload handling.
  • Disk Storage (DASD - Direct Access Storage Device): Utilizes RAID for data redundancy and reliability.
  • Workstations & Terminals: Often uses 5250 terminals for interactive sessions.
  • Communication Hardware: Includes Ethernet, TCP/IP, and legacy SNA (Systems Network Architecture) for connectivity.
2. Operating System (OS/400 → IBM i) :
  • Object-Based Architecture: Everything (files, programs, devices) is treated as an object, enhancing security and stability.
  • Integrated Security: Built-in security features such as user profiles, role-based access, and encryption.
  • Automatic Storage Management: The system automatically handles storage allocation and retrieval.
3. Database (DB2 for i) :
  • Built-in Relational Database: DB2 for i (formerly known as DB2/400) is tightly integrated with the OS, ensuring high performance.
  • Single-Level Storage: Applications and data reside in the same addressable memory space, improving efficiency.
  • Native SQL Support: Allows seamless interaction with modern databases.
4. Programming & Development Tools :
  • RPG (Report Program Generator): The primary programming language for AS/400 business applications.
  • COBOL, CL (Control Language), Java, C, C++: Supports multiple languages for development.
  • IBM Rational Developer for i (RDi): A modern development environment for IBM i applications.
  • Integrated File System (IFS): Supports hierarchical file structures, enabling compatibility with UNIX/Linux and Windows file systems.
5. User Interface & Access Methods :
  • Green Screen Interface (5250 Emulator): Traditional text-based terminal interface.
  • IBM Navigator for i: Web-based GUI for system administration.
  • Client Access & IBM Access Client Solutions (ACS): Provides remote connectivity from Windows, Linux, and Mac.
6. Networking & Communication :
  • TCP/IP, FTP, Telnet: Supports modern networking protocols.
  • SNA (Systems Network Architecture): Legacy communication for older systems.
  • ODBC/JDBC Connectivity: Allows database access from external applications.
7. System Management & Virtualization :
  • Logical Partitioning (LPAR): Enables multiple virtual instances of IBM i on the same hardware.
  • IBM PowerVM: Virtualization technology for running multiple OS instances (IBM i, AIX, Linux).
  • BRMS (Backup, Recovery & Media Services): Comprehensive backup and disaster recovery solution.
8. Modernization & Cloud Capabilities :
  • Web Services & APIs: Supports RESTful and SOAP-based web services.
  • Open-Source Support: Compatible with Python, PHP, Node.js, and Git.
  • IBM Cloud & Hybrid Integration: Can integrate with cloud services like IBM Cloud and AWS.

How does the AS/400 relate to the System/36 and System/38?

IBM developed the System/36 (S/36) and System/38 (S/38) as midrange computers in the late 1970s and early 1980s. The AS/400 (IBM i) was introduced in 1988 to unify and replace these systems while maintaining backward compatibility.

Feature | System/36 (S/36) | System/38 (S/38) | AS/400 (IBM i)
Release Year | 1983 | 1979 | 1988 (IBM i continues today)
Target Audience | Small businesses | Mid-to-large enterprises | Businesses of all sizes
Operating System | SSP (System Support Program) | CPF (Control Program Facility) | OS/400 → IBM i
Architecture | 16-bit, simple file system | 48-bit, advanced database & security | 48-bit CISC (initially), later 64-bit RISC Power
Database | Flat file-based | Relational-like, integrated DB | DB2 for i (fully relational)
Security | Basic user profiles | Object-based security model | Enhanced role-based security
Programming Languages | RPG II, COBOL, OCL | RPG III, COBOL, CL | RPG IV, COBOL, C, Java, SQL, PHP
User Interface | Green screen (5250 terminals) | Green screen (5250 terminals) | Green screen, Web UI, APIs
Virtualization | None | None | Logical Partitioning (LPAR), PowerVM

1. IBM System/36 (S/36) :
  • Designed for small-to-medium businesses with a focus on batch processing.
  • Used SSP (System Support Program) as its OS.
  • Emphasized ease of use with flat-file data storage (not fully relational).
  • Programming: RPG II, COBOL, and OCL (Operational Control Language).
  • Lacked advanced security and database management found in later systems.

* Strengths: Simple, cost-effective, and widely adopted in small businesses.
* Weaknesses: Lacked a modern database and security model.


2. IBM System/38 (S/38) :
  • Aimed at larger enterprises with a more sophisticated computing model.
  • Used CPF (Control Program Facility) OS with an object-based architecture.
  • Integrated an advanced relational-like database (precursor to DB2 for i).
  • Introduced security by design, with user-based and object-level access control.
  • Programming: RPG III, COBOL, and Control Language (CL).

* Strengths: Advanced security, integrated DB, and forward-looking architecture.
* Weaknesses: Expensive and complex compared to S/36.


3. IBM AS/400 (Now IBM i) :
  • Launched in 1988 as a successor to both System/36 and System/38.
  • Combined S/36's ease of use with S/38's advanced security and database.
  • Backward compatible with S/36 and S/38 applications.
  • Introduced OS/400, later rebranded as IBM i, with a fully integrated DB2.
  • Supports modern programming languages: RPG IV, SQL, Java, C, Python, PHP.
  • Advanced virtualization (LPAR) and cloud capabilities.

* Strengths: Scalable, secure, highly reliable, backward-compatible.
* Weaknesses: Perceived as outdated due to its green screen UI, but modernized significantly.

What is the Licensed Internal Code (LIC)?

The Licensed Internal Code (LIC) is a low-level, firmware-like layer in IBM i (formerly AS/400) that acts as the interface between the hardware and the operating system (OS/400 or IBM i). It is essential to the system's stability, performance, and security.

Key Functions of the LIC :
  1. Hardware Abstraction :
    • LIC isolates the OS from direct hardware dependencies, enabling seamless hardware upgrades without affecting applications.
    • Ensures IBM i runs on different generations of IBM Power Systems without major software changes.
  2. System Initialization & Control :
    • Manages the startup process, including hardware diagnostics and system integrity checks.
    • Controls firmware-level functions like memory management and I/O operations.
  3. Object-Based Architecture Support :
    • IBM i uses an object-based system where everything (files, programs, devices) is treated as an object.
    • LIC enforces object-level security and integrity, preventing corruption and unauthorized access.
  4. Single-Level Storage Management :
    • Implements IBM i’s Single-Level Storage (SLS) model, where RAM and disk storage are treated as a single address space for efficient memory management.
    • Automatically handles paging and storage allocation.
  5. Database Integration :
    • LIC is responsible for the tight integration of DB2 for i with the OS, optimizing database performance and reliability.
    • Allows applications to access the database without requiring separate database management software.
  6. Security & Virtualization :
    • Enforces system security policies, including encryption and authentication.
    • Supports Logical Partitioning (LPAR) for running multiple instances of IBM i, AIX, or Linux on the same hardware.

Why is LIC Important?
  • Provides stability and performance optimization for IBM i systems.
  • Ensures backward compatibility, allowing old applications to run on new hardware.
  • Reduces system complexity, as many low-level functions are handled automatically.

Difference Between LIC and OS/400 (IBM i) :
Feature | LIC (Licensed Internal Code) | OS/400 (IBM i)
Role | Low-level firmware-like layer | Full operating system
User Interaction | Not directly accessible | Directly used by admins and developers
Functionality | Manages hardware, security, and virtualization | Provides UI, database, networking, and application support
Storage Model | Controls Single-Level Storage (SLS) | Uses SLS for file and memory management

How do you check the OS version of an AS/400 (IBM i) system?

There are several ways to check the OS version of an AS400 system (now more commonly referred to as IBM i):

1. Using the DSPPTF command:

  • This is generally considered the most reliable method.
  • Type DSPPTF on the command line and press Enter.
  • The OS version will be displayed at the top of the screen. For example, you might see something like "V7R1M0", which means Version 7 Release 1 Modification level 0.

2. Using the GO LICPGM command:

  • Type GO LICPGM on the command line and press Enter.
  • Select option 10 (Display installed licensed programs).
  • Press F11 (Display release).
  • Look for the entry with the description "Operating System/400". The version will be displayed in the Release column.

3. Using the DSPSFWRSC command:

  • Type DSPSFWRSC on the command line and press Enter.
  • This will display all software resources installed on the system.
  • Press F11 to display the release level for each resource.
  • Look for the entry related to the operating system to find the version.

4. Checking spool files:

  • The first line of most spool files contains the OS version.
  • You can use the WRKSPLF command to view your spool files.

Important Notes:

  • AS400 is an outdated term: The system has evolved over the years and is now officially called IBM i. However, many people still use the term AS400.
  • Tech Refreshes (TRs): Starting with version 7.1, IBM introduced Technology Refreshes to deliver more frequent updates. So, in addition to the version number, you might also need the TR level to determine the exact capabilities of the OS. You can find the TR level with DSPPTF LICPGM(5770999).
  • DB2 is integrated: The database system (DB2) is an integral part of the IBM i operating system, not a separate product. Therefore, you don't need to check the DB2 version separately. The OS version will indicate the DB2 capabilities.
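
On recent releases you can also retrieve the version with SQL. A minimal sketch, assuming the SYSIBMADM.ENV_SYS_INFO catalog view is available (IBM i 7.1 and later); run it from STRSQL or ACS:

-- Query basic OS information from the SQL catalog
SELECT OS_NAME, OS_VERSION, OS_RELEASE
  FROM SYSIBMADM.ENV_SYS_INFO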

What are subsystems in IBM i and how do they work?

Subsystems are a fundamental concept in IBM i (formerly AS400): they provide a way to manage and control the execution of jobs (programs and processes). Think of them as separate environments within the operating system, each with its own resources and rules.

Here's a breakdown of how subsystems work:

1. Subsystem Descriptions:

  • Each subsystem is defined by a subsystem description object. This object contains information about the subsystem, such as:
    • The maximum number of jobs that can run concurrently in the subsystem.
    • The storage pools (memory areas) that the subsystem can use.
    • The job queues associated with the subsystem.
    • The classes that define the runtime attributes of jobs entering the subsystem.

2. Job Queues:

  • Job queues are like waiting areas for jobs that are ready to be executed.
  • When a job is submitted to the system, it is placed in a job queue.
  • Each subsystem is associated with one or more job queues.

3. Routing Entries:

  • Routing entries define how jobs are selected from job queues and assigned to the subsystem for execution.
  • They specify criteria such as the job name, user profile, or job type.

4. Classes:

  • Classes define the runtime attributes of jobs, such as their priority, time slice, and memory allocation.
  • Each job is associated with a class.

How it all works together:

  1. When a job is submitted, it is placed in a job queue.
  2. The subsystem monitors its associated job queues for jobs that match its routing entries.
  3. When a matching job is found, the subsystem selects it for execution.
  4. The subsystem uses the job's class to determine its runtime attributes.
  5. The job is then executed within the subsystem's environment, using the resources defined in the subsystem description.

Benefits of using subsystems:

  • Resource management: Subsystems allow you to allocate system resources (CPU, memory, etc.) to different types of work. For example, you might have one subsystem for interactive jobs and another for batch jobs.
  • Performance tuning: By configuring subsystems appropriately, you can optimize the performance of different workloads.
  • Security: Subsystems can be used to isolate different types of work, which can improve security.
  • System availability: If one subsystem fails, it does not necessarily affect other subsystems.

Examples of common subsystems:

  • QINTER: Handles interactive jobs (user sessions).
  • QBATCH: Handles batch jobs (programs that run without user interaction).
  • QCMN: Handles communication jobs (network-related tasks).
  • QSYSWRK: Handles system work (background processes).
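
To see these pieces together, here is a minimal CL sketch that creates and starts a custom batch subsystem. This is an illustration rather than a production recipe; the library, subsystem, and queue names are invented:

/* Create a subsystem description with one storage pool (*BASE) */
CRTSBSD  SBSD(MYLIB/MYSBS) POOLS((1 *BASE)) TEXT('Custom batch subsystem')
/* Create a job queue and attach it; allow up to 3 concurrent jobs */
CRTJOBQ  JOBQ(MYLIB/MYJOBQ)
ADDJOBQE SBSD(MYLIB/MYSBS) JOBQ(MYLIB/MYJOBQ) MAXACT(3)
/* Routing entry: any routing data runs through QCMD with class QBATCH */
ADDRTGE  SBSD(MYLIB/MYSBS) SEQNBR(10) CMPVAL(*ANY) PGM(QSYS/QCMD) CLS(QGPL/QBATCH)
/* Start the subsystem and send work to its queue */
STRSBS   SBSD(MYLIB/MYSBS)
SBMJOB   CMD(CALL PGM(MYLIB/NIGHTLY)) JOB(NIGHTLY) JOBQ(MYLIB/MYJOBQ)
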
Memory Management in AS/400 (IBM i) :

IBM i (formerly AS/400) uses an advanced Single-Level Storage (SLS) model for memory management, making it unique compared to traditional operating systems like Windows, Linux, or Unix. This model simplifies storage access, optimizes performance, and enhances system reliability.

1. Key Features of Memory Management in AS/400 (IBM i) :
1.1. Single-Level Storage (SLS) :
  • In IBM i, RAM and disk storage are treated as a single, unified address space.
  • Programs, data, and objects do not need to know whether they reside in physical memory (RAM) or disk (DASD)—the system automatically manages this.
  • This approach eliminates the need for complex file paths, as objects are accessed by their system-wide unique address.
1.2. Automatic Paging & Virtual Memory :
  • IBM i does not use traditional virtual memory with swap files. Instead, it dynamically pages data between RAM and disk.
  • Frequently accessed data remains in RAM, while less-used data is stored on disk.
  • The Page Fault Manager ensures efficient memory retrieval when needed.
1.3. Pool-Based Memory Allocation :
  • Memory is divided into storage pools, each allocated for specific workloads.
  • Storage pools can be system-defined or user-defined, allowing administrators to optimize performance.
  • Pools can be adjusted dynamically to allocate more memory to critical processes.
1.4. Object-Based Memory Management :
  • IBM i manages memory at the object level rather than using traditional file-based access.
  • Objects cannot be directly modified in memory, which prevents corruption and enhances security.
  • Only the OS (IBM i) can allocate and manage memory for objects.
1.5. Automatic Garbage Collection & Reclamation :
  • IBM i continuously monitors and reclaims unused memory automatically.
  • No need for manual memory defragmentation or complex garbage collection routines.

2. Components of IBM i Memory Management :
Component | Function
Main Storage (RAM) | Holds active jobs, programs, and system data.
Disk Storage (DASD - Direct Access Storage Device) | Used for persistent storage and automatic paging.
Storage Pools (Subsystems) | Divides memory into pools for different workloads.
Page Fault Manager | Handles paging between RAM and disk.
Automatic Storage Reclamation | Frees up memory automatically when not in use.


3. How IBM i (AS/400) Differs from Traditional OS Memory Management :
Feature | IBM i (AS/400) | Windows/Linux/Unix
Storage Model | Single-Level Storage (SLS) | Separate RAM and disk management
Paging Mechanism | Automatic, no swap files | Virtual memory with swap files
Memory Fragmentation | Low, due to object-based model | Higher; may require defragmentation
Performance Optimization | Dynamic memory pools | Fixed allocations or manual tuning

Benefits of This Model :

* Simplifies administration : No need for complex memory configurations.
* Improves performance : Efficient paging and storage pool management.
* Enhances reliability : Reduces memory corruption and fragmentation.
* Ensures security : Object-based memory model prevents unauthorized modifications.

What is a library in AS/400?

A library in AS/400 (IBM i) is a system-level object that organizes and stores other objects such as programs, files, and data. It functions much like a directory or folder in other operating systems, but with object-based management and integrated access control.

How Does a Library Work in AS/400?

1. Structure of Libraries :
  • Libraries live in the QSYS.LIB file system, which is exposed through the Integrated File System (IFS).
  • Unlike traditional directories, libraries store typed objects, not just files.
  • Each object type (e.g., program, physical file, logical file, message queue) has system-defined attributes and is managed by the OS rather than edited like an ordinary file.
2. Library List (LIBL) :
  • IBM i uses a Library List (LIBL) to determine which libraries are searched, in order, when a command or program references an unqualified object.
  • The library list is searched in this order:
    1. System portion (e.g., QSYS, QUSRSYS) – IBM-supplied libraries.
    2. Product Libraries – IBM or third-party software libraries.
    3. Current Library (CURLIB) – the user's primary working library.
    4. User portion – custom application and data libraries.
3. Types of Libraries :
Library Type | Purpose
QSYS | System library containing IBM-supplied objects.
QGPL | General-purpose library for shared use.
QTEMP | Temporary library that exists only for the duration of the job.
User Libraries | Custom libraries created by users to store programs and data.
Product Libraries | Contain third-party or IBM software components.
4. Library Security & Authority :
  • IBM i provides object-level security for libraries.
  • Authorities include read, write, execute, delete, and management permissions.
  • Security settings can be assigned at the user, group, or system level.


Commands for Managing Libraries :

Command | Function
CRTLIB | Create a new library.
WRKLIB | Work with libraries (view/manage).
DSPLIB | Display library contents.
DLTLIB | Delete a library.
CHGLIB | Change library attributes.

Example :
To create a library called MYLIB, use :

CRTLIB LIB(MYLIB)

To add MYLIB to the current session’s library list :

ADDLIBL LIB(MYLIB)
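
To verify the change and undo it later (a small usage sketch):

DSPLIBL                 /* display the job's library list; MYLIB should appear */
RMVLIBLE LIB(MYLIB)     /* remove MYLIB from the library list when finished */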

 

Advantages of Using Libraries in IBM i :

* Efficient Organization – Stores objects logically for easy access.
* Security & Access Control – Provides granular control over who can access or modify objects.
* Performance Optimization – System searches objects based on the Library List (LIBL), improving efficiency.
* Backward Compatibility – Applications remain functional across system upgrades.

Control Language (CL) commands are used in IBM i (AS/400) to manage system operations, files, programs, and user interactions. These commands help automate tasks, manage jobs, control system resources, and handle user interactions.

1. Library & Object Management Commands :
Command | Function
CRTLIB LIB(LIBNAME) | Create a new library.
WRKLIB LIB(LIBNAME) | Work with libraries (view/manage).
DSPLIB LIB(LIBNAME) | Display contents of a library.
DLTLIB LIB(LIBNAME) | Delete a library.
CRTDTAARA DTAARA(DTANAME) TYPE(*CHAR) LEN(10) | Create a data area.
WRKOBJ OBJ(*ALL/*ALL) | Work with objects.
DSPOBJD OBJ(LIBNAME/*ALL) OBJTYPE(*ALL) | Display object descriptions.

2. File & Data Management Commands :
Command | Function
CRTPF FILE(LIBNAME/FILENAME) RCDLEN(100) | Create a physical file.
CRTLF FILE(LIBNAME/FILENAME) SRCFILE(LIB/SRCFILE) | Create a logical file.
DSPPFM FILE(LIBNAME/FILENAME) | Display records in a physical file.
WRKF FILE(LIBNAME/FILENAME) | Work with database files.
CPYF FROMFILE(FILE1) TOFILE(FILE2) MBROPT(*REPLACE) | Copy file contents.
RGZPFM FILE(LIBNAME/FILENAME) | Reorganize a file to remove deleted records.

3. Job & Work Management Commands :
Command | Function
WRKACTJOB | Work with active jobs.
WRKJOB JOB(JOBNAME) | Work with a specific job.
ENDJOB JOB(JOBNAME) | End a job.
SBMJOB CMD(CALL PGM(PGMNAME)) JOB(JOBNAME) | Submit a job for batch processing.
CHGJOB JOB(JOBNAME) RUNPTY(30) | Change job run priority.
WRKUSRJOB USER(USERNAME) | View jobs for a specific user.

4. Program Execution Commands :
Command | Function
CALL PGM(PGMNAME) | Call and execute a program.
SBMJOB CMD(CALL PGM(PGMNAME)) | Submit a program for batch execution.
STRDBG PGM(PGMNAME) | Start debugging a program.
ENDDBG | End debugging.
RUNQRY QRY(QRYNAME) | Run a query.

5. User Profile & Security Commands :
Command | Function
CRTUSRPRF USRPRF(USERNAME) PASSWORD(PASS123) | Create a new user profile.
CHGUSRPRF USRPRF(USERNAME) PASSWORD(NEWPASS) | Change a user password.
WRKUSRPRF USRPRF(USERNAME) | Work with user profiles.
DSPUSRPRF USRPRF(USERNAME) | Display user profile details.
GRTOBJAUT OBJ(LIB/PGMNAME) OBJTYPE(*PGM) USER(USERNAME) AUT(*ALL) | Grant a user authority to an object.

6. System Control & Configuration Commands :
Command | Function
DSPJOBLOG | Display job logs.
DSPMSG | Display system messages.
SNDMSG MSG('Hello') TOUSR(USERNAME) | Send a message to a user.
CHGSYSVAL SYSVAL(QDATE) VALUE('2024-02-05') | Change a system value (QDATE's value must actually match the format set by QDATFMT).
WRKSYSVAL | Work with system values.

7. Backup & Recovery Commands :
Command | Function
SAVLIB LIB(LIBNAME) DEV(TAP01) | Save a library to tape.
RSTLIB SAVLIB(LIBNAME) DEV(TAP01) | Restore a library from tape.
SAVOBJ OBJ(OBJNAME) LIB(LIBNAME) DEV(*SAVF) SAVF(LIB/SAVFNAME) | Save a specific object to a save file.
RSTOBJ OBJ(OBJNAME) SAVLIB(LIBNAME) DEV(*SAVF) SAVF(LIB/SAVFNAME) | Restore a specific object from a save file.

8. Printer & Output Management :
Command | Function
WRKOUTQ | Work with output queues.
WRKSPLF | Work with spooled files.
DLTSPLF FILE(FILENAME) | Delete a spooled file.

Difference Between SBMJOB and CALL in CL (Control Language) on IBM i (AS/400) :

SBMJOB and CALL are both Control Language (CL) commands used to execute programs, but they work differently in terms of execution method, job processing, and system resource management.

1. CALL Command :

The CALL command executes a program immediately in the current job (interactive or batch).

Syntax :

CALL PGM(MYPGM) PARM('VALUE1' 'VALUE2')

 

Characteristics of CALL :

* Runs synchronously – The program executes immediately and must finish before the next command runs.
* Uses the current job's resources – Runs in the same job as the calling program.
* Tied to the user's session – If the session ends, the program stops.
* Common in interactive jobs – Typically used for menu-driven applications.

Example Usage :
  • Running a report immediately within a user's session.
  • Executing a program as part of a CL script.

2. SBMJOB Command :

The SBMJOB command submits a job to run in batch mode, allowing it to execute separately from the current session.

Syntax :

SBMJOB CMD(CALL PGM(MYPGM)) JOB(MYBATCHJOB)

 

Characteristics of SBMJOB :

* Runs asynchronously – The job runs in the background while the user can continue working.
* Uses a batch subsystem – The submitted job runs in a separate batch job queue.
* Independent execution – Even if the user logs off, the job continues running.
* Good for long-running tasks – Ideal for reports, backups, mass data processing, etc.

Example Usage :
  • Submitting a long report so the user can continue working.
  • Running nightly data processing in a batch queue.

3. Key Differences Between CALL and SBMJOB :

Feature | CALL | SBMJOB
Execution Mode | Synchronous (waits to finish) | Asynchronous (runs in background)
Job Type | Runs in the current interactive/batch job | Runs in a separate batch job
User Interaction | Tied to user session | Runs independently
Resource Usage | Uses current job's resources | Uses a batch subsystem
Best Used For | Immediate program execution | Background processing, long tasks
Continues If User Logs Off? | No | Yes

4. Example Use Case Comparison :

Scenario: A user wants to generate a large report.

  • If they use CALL, they must wait until the report finishes.
  • If they use SBMJOB, they can continue working while the report runs in the background.
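
As a small illustration, the same (hypothetical) report program run both ways:

CALL   PGM(MYLIB/RPTPGM) PARM('2024')               /* user waits here     */
SBMJOB CMD(CALL PGM(MYLIB/RPTPGM) PARM('2024')) +
       JOB(RPT2024) JOBQ(QGPL/QBATCH)               /* returns immediately */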

What is the purpose of the MONMSG command in CL?

In IBM i (formerly AS400), the MONMSG command is a powerful tool for monitoring and handling messages within Control Language (CL) programs. Its primary purpose is to let your programs respond to specific events or errors that occur during execution.

Here's a breakdown of why MONMSG is essential:

1. Error Handling:

  • Programs often encounter errors, such as file not found, invalid data, or system issues.
  • MONMSG allows you to define how your program should react to these errors.
  • You can specify actions like:
    • Ignoring the error: The program continues execution as if the error didn't occur.
    • Transferring control: The program jumps to a specific label or subroutine to handle the error.
    • Displaying a message: The program informs the user about the error.
    • Ending the program: The program terminates gracefully.

2. Event Monitoring:

  • Besides errors, MONMSG can watch for status and notify messages that signal events, such as reaching end of file when reading with RCVF (message CPF0864).
  • This allows your programs to be more dynamic and responsive to different situations.

3. Program Control:

  • By using MONMSG strategically, you can control the flow of your program based on the messages it receives.
  • This can be used to implement complex logic and decision-making within your CL programs.

How it works:

  • MONMSG monitors for specific messages identified by their message IDs (e.g., CPF0001, MCH1211).
  • You can specify the message ID and the action to be taken when that message is received.
  • MONMSG can be placed at the beginning of a program (program-level) to monitor messages throughout the program, or it can be placed after a specific command (command-level) to monitor messages from that command only.

Example :

PGM        /* Program to copy a file */

             CPYF FROMFILE(MYLIB/MYFILE1) TOFILE(MYLIB/MYFILE2)
             MONMSG MSGID(CPF2812) EXEC(GOTO ERROR) /* File not found */

             /* ... more processing ... */

             GOTO END

ERROR:       /* Error handling routine */
             SNDPGMMSG  MSGID(CPF9898) MSGF(QCPFMSG) MSGDTA('File not found') MSGTYPE(*ESCAPE)
             /* ... other error handling actions ... */

END:         ENDPGM


In this example, the MONMSG command monitors for message CPF2812, which indicates that the "from" file was not found. If this message is received, the program jumps to the ERROR label to handle the error.

 

How do you handle errors in a CL program?

Error handling is a crucial aspect of writing robust CL programs in IBM i. Here's a breakdown of how to handle errors effectively:

1. Understanding Messages:

  • Message IDs: Errors in IBM i are associated with specific message IDs (e.g., CPF0001, MCH1211). These IDs help identify the type of error that occurred.
  • Message Text: Each message ID has associated text that provides a description of the error.
  • Message Types: Messages have different types, such as:
    • Escape messages: Indicate a serious error that may require program termination.
    • Notify messages: Inform about a condition that might need attention but doesn't necessarily require termination.
    • Status messages: Provide information about the progress of a command.

2. Using the MONMSG command:

  • The MONMSG command is the primary tool for handling messages in CL programs.
  • It allows you to monitor for specific messages and define actions to be taken when those messages are received.
  • You can specify:
    • The message ID to monitor.
    • The action to take (e.g., ignore the message, transfer control to a label, display a message, end the program).

3. Setting up MONMSG:

  • Program-level MONMSG: Placed at the beginning of your program to monitor messages throughout the entire program.
  • Command-level MONMSG: Placed immediately after a specific command to monitor messages generated by that command.
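
A minimal sketch showing both levels in one program (CPF0000 is the generic match for any CPF message; CPF2105 is the "object not found" message signaled by DLTF):

PGM
             /* Program-level monitor: must follow the declarations; */
             /* catches any otherwise unmonitored CPF message        */
             MONMSG     MSGID(CPF0000) EXEC(GOTO FATAL)

             /* Command-level monitor: applies to the DLTF only */
             DLTF       FILE(QTEMP/WORKFILE)
             MONMSG     MSGID(CPF2105)   /* not found - ignore */

             GOTO       DONE
FATAL:       SNDPGMMSG  MSGID(CPF9898) MSGF(QCPFMSG) +
                          MSGDTA('Unexpected error') MSGTYPE(*ESCAPE)
DONE:        ENDPGM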

4. Actions in MONMSG:

  • EXEC: The single command to run when the monitored message is caught; most often GOTO, to branch to an error-handling label, but any CL command is allowed.
  • Omitting EXEC: The message is simply ignored and execution continues with the next command.
  • Inside an error-handling routine you will typically use:
    • RTVMSG: Retrieves the message text for a specific message ID.
    • SNDPGMMSG: Sends a program message to the caller or to a message queue.

5. Example :

PGM        /* Program to copy a file */

             CPYF FROMFILE(MYLIB/MYFILE1) TOFILE(MYLIB/MYFILE2)
             MONMSG MSGID(CPF2812) EXEC(GOTO ERROR) /* File not found */

             /* ... more processing ... */

             GOTO END

ERROR:       /* Error handling routine */
             SNDPGMMSG  MSGID(CPF9898) MSGF(QCPFMSG) MSGDTA('File not found') MSGTYPE(*ESCAPE)
             /* ... other error handling actions ... */

END:         ENDPGM

 

In this example :
  • The MONMSG command monitors for message CPF2812 (file not found) after the CPYF command.
  • If the message is received, the program jumps to the ERROR label.
  • The ERROR routine sends an escape message to the user and can perform other error handling actions.

6. Best Practices:

  • Monitor for critical errors: Focus on handling errors that can cause your program to fail or produce incorrect results.
  • Provide informative messages: When an error occurs, provide clear and helpful messages to the user or system administrator.
  • Implement error recovery: If possible, try to recover from errors and allow the program to continue execution.
  • Use a consistent error handling strategy: Develop a standard approach for handling errors in your CL programs to make them easier to maintain and understand.

By effectively using MONMSG and following best practices, you can create robust and reliable CL programs that can handle errors gracefully and ensure the smooth operation of your IBM i system.

How do you pass parameters between CL and RPG programs?

Passing parameters between CL (Control Language) and RPG (Report Program Generator) programs in IBM i is a common requirement when you need to integrate the two languages. Here's how to do it:

1. Defining Parameters in RPG:

  • In your RPG program, define the parameters you expect to receive from the CL program.
  • In free-form RPG IV you do this with a procedure interface (DCL-PI); in older fixed-form code you would use an *ENTRY PLIST or a prototype/procedure interface in D specs.
  • For each parameter you specify the data type, length, and other attributes.

Example :

**FREE
ctl-opt dftactgrp(*no) actgrp(*caller);

dcl-pi *n;
   Name    char(20);
   Age     packed(3: 0);
   Salary  packed(7: 2);
end-pi;

// ... rest of your RPG program logic ...

 

In this example :

  • Name is a character parameter of length 20.
  • Age is a packed-decimal parameter (3 digits, 0 decimals).
  • Salary is a packed-decimal parameter (7 digits, 2 decimals). CL *DEC variables are stored in packed format, so they match RPG packed fields.

2. Calling the RPG Program from CL:

  • In your CL program, you use the CALL command to invoke the RPG program.
  • You pass the parameters to the RPG program using the PARM parameter of the CALL command.
  • The order of the parameters in the PARM list must match the order of the parameters defined in the RPG program.

Example :

PGM        /* CL program to call the RPG program */

             DCL        VAR(&NAME) TYPE(*CHAR) LEN(20) VALUE('John Doe')
             DCL        VAR(&AGE) TYPE(*DEC) LEN(3 0) VALUE(30)
             DCL        VAR(&SALARY) TYPE(*DEC) LEN(7 2) VALUE(500.00)

             CALL       PGM(MYLIB/MYRPGPGM) PARM(&NAME &AGE &SALARY)

             ENDPGM

 

In this example :

  • We declare CL variables &NAME, &AGE, and &SALARY and assign values to them.
  • We then call the RPG program MYLIB/MYRPGPGM and pass these variables as parameters using the PARM parameter.

Important Considerations:

  • Data Type Compatibility: Ensure that the data types of the parameters in the CL program match the data types defined in the RPG program. You might need to perform data type conversions if they are not compatible.
  • Parameter Lengths: Pay close attention to the lengths of the parameters. If the lengths don't match, you might encounter errors or unexpected results.
  • Passing Numeric Values: When passing numeric values, you might need to use the %BIN or %DEC built-in functions in CL to ensure that the values are passed correctly.
  • Passing Character Values: When passing character values, ensure that they are enclosed in single quotes in the PARM list of the CALL command.

Best Practices:

  • Use descriptive parameter names: This makes your code easier to understand and maintain.
  • Validate input parameters: In your RPG program, check the values of the input parameters to ensure that they are valid.
  • Handle errors gracefully: If any errors occur during parameter passing, handle them appropriately in your RPG program.

 

 

What is the difference between SNDPGMMSG and SNDUSRMSG?

Both SNDPGMMSG and SNDUSRMSG are CL commands used to send messages in IBM i, but they serve different purposes and have distinct characteristics:

SNDPGMMSG (Send Program Message)

  • Purpose: Primarily used for communication within a program or between programs in the same call stack. It's often used for:
    • Status updates: Informing the calling program or the job log about the progress of a program.
    • Error handling: Sending escape messages to indicate that a program has encountered an error and needs to terminate.
    • Notifications: Sending notify messages about conditions that might require attention but don't necessarily cause termination.
  • Message Types: Supports a wider range of message types, including:
    • *STATUS
    • *ESCAPE
    • *NOTIFY
    • *COMPL (Completion)
    • *DIAG (Diagnostic)
  • Message Destination: Can send messages to:
    • The external message queue for the job (*EXT)
    • A specific message queue
    • The message queue of the calling program

SNDUSRMSG (Send User Message)

  • Purpose: Designed for communication with a user or system operator. It's typically used for:
    • Inquiry messages: Asking the user for input or confirmation.
    • Informational messages: Displaying information to the user.
  • Message Types: Limited to:
    • *INQ (Inquiry – waits for a reply)
    • *INFO (Informational)
  • Message Destination: Can send messages to:
    • A specific user's message queue
    • The system operator's message queue
    • A display station

Key Differences Summarized:

Feature | SNDPGMMSG | SNDUSRMSG
Primary Use | Intra-program/call stack communication | User/operator communication
Message Types | Wider range (status, escape, notify, etc.) | Inquiry and informational
Destination | Job's external queue, specific queue, caller's queue | User's queue, operator's queue, display station

 

In essence:

  • SNDPGMMSG is for internal program communication and error handling.
  • SNDUSRMSG is for interacting with users or the system operator.

Example:

If you want to display a message to the user asking for confirmation, you would use SNDUSRMSG with MSGTYPE(*INQ). If you want to send a message to the calling program indicating that a subroutine has completed successfully, you would use SNDPGMMSG with MSGTYPE(*COMPL).
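
A small sketch combining the two (the file name and message text are illustrative):

             DCL        VAR(&REPLY) TYPE(*CHAR) LEN(1)
             /* Ask the user and wait for a reply */
             SNDUSRMSG  MSG('Reorganize the file now? (Y/N)') MSGTYPE(*INQ) +
                          VALUES(Y N) MSGRPY(&REPLY)
             IF         COND(&REPLY *EQ 'Y') THEN(RGZPFM FILE(MYLIB/CUSTFILE))
             /* Tell the caller we finished */
             SNDPGMMSG  MSG('File maintenance complete') MSGTYPE(*COMPL)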

 

What are Job Descriptions (JOBDs) and how do you create them?

Job Descriptions (JOBDs) in IBM i (formerly AS400) are objects that define the characteristics and environment for running jobs (programs and processes). They act as templates that specify how a job should behave when it is submitted to the system.

Here's how you can create a JOBD:

1. Using the CRTJOBD command:

  • The most common way to create a JOBD is using the CRTJOBD command.
  • You can execute this command from a command line or within a CL program.

Syntax:

CRTJOBD JOBD(library/jobd-name) parameters
  • library: The library where you want to create the JOBD.
  • jobd-name: The name you want to give to the JOBD.
  • parameters: Various parameters that define the JOBD's attributes.

2. Key Parameters:

Here are some of the most important parameters you'll likely use when creating a JOBD:

  • JOBQ: Specifies the job queue where jobs using this JOBD will be submitted.
  • JOBPTY: Sets the priority of jobs using this JOBD.
  • OUTPTY: Sets the priority of spooled output generated by jobs using this JOBD.
  • INLLIBL: Specifies the initial library list for jobs using this JOBD.
  • USER: Specifies the user profile under which jobs using this JOBD run (*RQD requires the submitter to supply one).
  • RTGDTA: Specifies routing data that can be used to route jobs to specific subsystems.
  • TEXT: Provides a description of the JOBD.

3. Example:

CRTJOBD JOBD(MYLIB/MYJOBD) JOBQ(MYLIB/MYJOBQ) JOBPTY(5) OUTPTY(5) INLLIBL(MYLIB QSYS) TEXT('Job description for batch processing')

This command creates a JOBD named MYJOBD in library MYLIB. Jobs using this JOBD will be submitted to the MYJOBQ job queue, have a job priority of 5, an output priority of 5, an initial library list of MYLIB and QSYS, and a description of "Job description for batch processing."
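
Once created, the JOBD is typically referenced on SBMJOB (or attached to a user profile). Using the names from the example above:

SBMJOB CMD(CALL PGM(MYLIB/NIGHTLY)) JOB(NIGHTLY) JOBD(MYLIB/MYJOBD)

The submitted job then picks up MYJOBD's job queue, priorities, and initial library list.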

4. Using WRKJOBD:

  • You can also use the WRKJOBD command to work with job descriptions.
  • This command allows you to display, change, copy, or delete existing JOBDs.
  • You can create a new JOBD by selecting option 1 (Create) on the WRKJOBD display.

5. Important Considerations:

  • Authority: You need sufficient authority to create JOBDs in the specified library.
  • Naming conventions: Follow the naming conventions for IBM i objects when naming your JOBDs.
  • Planning: Carefully plan the attributes of your JOBDs based on the requirements of the jobs that will use them.

By creating and configuring JOBDs effectively, you can:

  • Control the execution environment of your jobs.
  • Manage system resources efficiently.
  • Improve the overall performance of your IBM i system.

Explain the use of OVRDBF, OPNQRYF, and DSPRCDLCK.

Let's break down the purposes of OVRDBF, OPNQRYF, and DSPRCDLCK in IBM i (formerly AS400) and how each is typically employed.


OVRDBF (Override with Database File) :

  • Purpose: OVRDBF is used to temporarily change the attributes of a physical file used by a program. This allows you to redirect a program to use a different file, a different member within a file, or modify certain file access characteristics.
  • Common Uses:
    • Testing: You might override a production file with a test file to run programs without affecting live data.
    • Data manipulation: You could override a file to point to a different member containing specific data for a particular process.
    • Dynamic file selection: In some cases, you might determine the file to be used at runtime and use OVRDBF to set the correct file.
    • Sharing access paths: When used with SHARE(*YES), it allows other programs (like RPG) to use the same open data path, which is often necessary when working with OPNQRYF.
  • Key Parameters:
    • FILE: The original file name to be overridden.
    • TOFILE: The file to be used instead of the original.
    • MBR: The member to be used (if overriding with a different member).
    • POSITION: Allows positioning the file pointer at a specific record (e.g., *START, *END, *RRN nnn).
    • SHARE: Specifies whether the open data path can be shared with other programs.
  • Example :

    OVRDBF FILE(CUSTFILE) TOFILE(TESTLIB/CUSTFILE) MBR(TESTDATA)

    This command overrides CUSTFILE to use the CUSTFILE in TESTLIB, specifically the TESTDATA member.


OPNQRYF (Open Query File) :

  • Purpose: OPNQRYF is used to create a dynamic view or subset of data from one or more physical files. It's a way to perform selection, joining, sorting, and calculations on data before it's accessed by a program.
  • Common Uses:
    • Data filtering: Selecting only records that meet specific criteria.
    • Joining files: Combining data from multiple files based on related fields.
    • Sorting: Arranging records in a specific order.
    • Calculations: Creating new fields based on existing data.
  • Key Parameters:
    • FILE: The file(s) to be queried.
    • FORMAT: The record format of the open query file (required for joins).
    • QRYSLT: The selection expression that filters records (e.g., 'CUSTNO *EQ 12345').
    • JFLD: The pairs of fields on which files are joined (there is no JOIN parameter).
    • KEYFLD: The fields by which the result is ordered (there is no SORT parameter).
    • OPNID: An identifier for the open query file, used when sharing access paths (defaults to the first file name).
  • Example :

    OPNQRYF FILE((CUSTFILE) (ORDERFILE)) FORMAT(JOINFMT) +
            QRYSLT('CUSTNO *EQ 12345') +
            JFLD((1/CUSTNO 2/CUSTNO)) +
            KEYFLD((ORDERDATE)) OPNID(MYQUERY)

    This opens a query that joins CUSTFILE and ORDERFILE on CUSTNO, selects records where CUSTNO is 12345, sorts the result by ORDERDATE, and assigns the identifier MYQUERY. JOINFMT is a file that describes the joined record format.
  • Note: OPNQRYF is usually paired with OVRDBF SHARE(*YES) so that the RPG or other HLL program called afterward reads through the same open data path.
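
The complete single-file calling pattern usually looks like this sketch (file and program names invented):

    OVRDBF  FILE(CUSTFILE) SHARE(*YES)                   /* share the open data path  */
    OPNQRYF FILE((CUSTFILE)) QRYSLT('STATUS *EQ "A"') +
            KEYFLD((CUSTNAME))                           /* active customers, by name */
    CALL    PGM(MYLIB/CUSTRPT)                           /* program reads CUSTFILE    */
    CLOF    OPNID(CUSTFILE)                              /* close the query file      */
    DLTOVR  FILE(CUSTFILE)                               /* remove the override       */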


DSPRCDLCK (Display Record Locks) :

  • Purpose: DSPRCDLCK is used to display the record locks held by jobs on a specific file. This is crucial for understanding and resolving lock contention issues that can prevent programs from accessing data.
  • Common Uses:
    • Troubleshooting: When a program is waiting for a file that's locked, DSPRCDLCK helps identify which job holds the lock.
    • Performance analysis: Identifying frequently locked files can help pinpoint performance bottlenecks.
    • System monitoring: Regularly checking for record locks can help prevent deadlocks and other data access issues.
  • Key Parameters:
    • FILE: The file for which to display record locks.
    • JOB: Optionally specify a specific job to see locks held by that job.
  • Example :

    DSPRCDLCK FILE(CUSTFILE)

    This command displays all record locks held on CUSTFILE.

In summary:

  • OVRDBF is for overriding file attributes, often used for testing, data manipulation, and dynamic file selection.
  • OPNQRYF is for creating dynamic views of data, allowing selection, joining, sorting, and calculations.
  • DSPRCDLCK is for displaying record locks, used for troubleshooting and performance analysis.

These commands are essential tools for managing and working with data in IBM i environments, enabling you to control file access, manipulate data effectively, and diagnose data contention problems.

How do you debug a CL program in AS400 (IBM i)?

Debugging CL programs in AS400 (IBM i) involves a combination of techniques and tools. Here's a breakdown of the process:

1. Preparation:

  • Compile with Debug Information: Ensure your CL program is compiled with debug information. This is crucial for the debugger to access the source code and let you set breakpoints, step through the code, and inspect variables. For ILE CL, specify DBGVIEW(*SOURCE) or DBGVIEW(*ALL) on the CRTBNDCL command; OPM programs created with CRTCLPGM are instead debugged from source via STRDBG with OPMSRC(*YES).
  • Source Member: Make sure the source member for your CL program is available and has not been modified since the program was compiled.

2. Starting the Debugger:

  • STRDBG Command: Use the STRDBG command to start the debugger. Specify the program you want to debug:
STRDBG PGM(library/program-name)
  • OPMSRC Parameter (for OPM Programs): If you are debugging an Original Program Model (OPM) CL program, you might need to specify OPMSRC(*YES) on the STRDBG command.
  • UPDPROD Parameter: The UPDPROD parameter controls whether you can make changes to production files during debugging. It's generally recommended to keep this set to *NO to avoid unintentional modifications to live data.
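
Putting the preparation and startup together (library and program names are examples):

CRTBNDCL PGM(MYLIB/MYPGM) SRCFILE(MYLIB/QCLSRC) SRCMBR(MYPGM) +
         DBGVIEW(*SOURCE)                /* compile with a source debug view */
STRDBG   PGM(MYLIB/MYPGM) UPDPROD(*NO)   /* start the debugger               */
/* ...set breakpoints in the source display (F6), then run the program... */
CALL     PGM(MYLIB/MYPGM)
ENDDBG                                   /* end the debug session            */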

3. Debugging Commands:

Once the debugger starts, you'll be presented with a display showing your CL program's source code. You can use various debug commands to control the debugging session:

  • Breakpoints:
    • BREAK: Set breakpoints at specific lines in your code to pause execution.
    • CLEAR: Remove breakpoints.
    • F6 (Add/Clear Breakpoint): Use this function key to toggle breakpoints on the displayed source code.
  • Stepping:
    • STEP or F10: Execute the next statement.
    • STEP INTO or F22: Step into a called program or subroutine.
  • Inspecting Variables:
    • EVAL: Display or change the value of a variable.
    • ATTR: Display the attributes (type, length) of a variable.
  • Displaying Source:
    • DISPLAY: Display a different source module.
    • FIND: Search for a string or line number in the source code.
  • Navigation:
    • UP, DOWN, LEFT, RIGHT: Scroll through the source code.
    • TOP, BOTTOM: Go to the beginning or end of the source code.
  • Other Commands:
    • HELP: Display help information about debug commands.
    • SET: Change debugging options.
    • WATCH: Monitor the value of a variable or expression.

4. Debugging Process:

  1. Set Breakpoints: Set breakpoints at strategic locations in your CL program where you want to pause execution and examine the program's state.
  2. Run the Program: Call your CL program. Execution will pause at the first breakpoint you've set.
  3. Step Through Code: Use the stepping commands (STEP, STEP INTO) to execute your CL program line by line. This allows you to observe the program's flow and identify any logical errors.
  4. Inspect Variables: Use the EVAL command to examine the values of variables at different points in your program. This helps you understand how data is being processed and identify any incorrect values.
  5. Evaluate Expressions: You can use EVAL to evaluate expressions and see the results. This can be useful for debugging complex logic.
  6. Identify and Fix Errors: When you encounter an error, analyze the program's state, the values of variables, and the flow of execution to understand the cause of the error. Modify your CL program code to correct the error.
  7. Test and Repeat: After fixing an error, recompile your CL program and repeat the debugging process to ensure that the error is resolved and no new errors have been introduced.

5. Ending the Debugger:

  • ENDDBG Command: Use the ENDDBG command to end the debugging session.

Tips for Effective Debugging:

  • Understand your program: Before you start debugging, make sure you have a good understanding of how your CL program is supposed to work.
  • Use meaningful variable names: This makes it easier to understand the purpose of variables when inspecting them during debugging.
  • Break down complex logic: If you have complex logic in your CL program, break it down into smaller, more manageable chunks. This makes it easier to isolate and debug errors.
  • Use comments: Add comments to your CL program to explain the purpose of different sections of code. This can help you understand the program's logic and identify potential errors.
  • Test thoroughly: After you have fixed an error, test your CL program thoroughly to ensure that it works correctly in all situations.

What is the difference between RTVJOBA and RTVMSG?

Both RTVJOBA and RTVMSG are CL commands in IBM i (formerly AS400) used to retrieve information, but they target different kinds of information:

1. RTVJOBA (Retrieve Job Attributes)

  • Purpose: RTVJOBA retrieves attributes of the job in which it runs and stores them in CL variables, which your program can then use to control its flow or behavior.
  • Common Uses:
    • Determining job type: Retrieve whether the job is interactive or batch and act accordingly.
    • Getting user information: Retrieve the user profile under which the job is running.
    • Retrieving output queue information: Get the name of the output queue associated with the job.
    • Capturing the library list: Save the current library or library-list parts so they can be restored later.
  • Key Parameters (each names a CL variable that receives a value):
    • JOB / USER / NBR: Receive the job name, user, and job number.
    • TYPE: Receives the job type ('0' = batch, '1' = interactive).
    • OUTQ: Receives the job's output queue.
    • SYSLIBL / USRLIBL / CURLIB: Receive the system portion, user portion, and current library of the library list.
    • RTGDTA: Receives the routing data.
  • Example :

             DCL        VAR(&USER) TYPE(*CHAR) LEN(10)
             DCL        VAR(&JOBTYPE) TYPE(*CHAR) LEN(1)

             RTVJOBA    USER(&USER) TYPE(&JOBTYPE)
             IF         COND(&JOBTYPE *EQ '1') THEN(DO)
                /* Perform interactive processing */
             ENDDO
             ELSE       CMD(DO)
                /* Perform batch processing */
             ENDDO

  • This example retrieves the user and job type ('1' = interactive, '0' = batch) and branches accordingly.


2. RTVMSG (Retrieve Message)

  • Purpose: RTVMSG is used to retrieve the text of a message from a message file. This allows you to dynamically display messages to users or use message text in your programs.
  • Common Uses:
    • Displaying messages: You can retrieve the text of a predefined message and display it to the user using SNDUSRMSG.
    • Error handling: You can retrieve the text of an error message and include it in your error handling routines.
    • Internationalization: You can store messages in different message files for different languages and retrieve the appropriate message based on the user's locale.
  • Key Parameters:
    • MSGID: Specifies the message ID of the message to retrieve.
    • MSGF: Specifies the message file containing the message.
    • MSG: Specifies the CL variable where the message text will be stored.
    • MSGDTA: Specifies the message data to be substituted into the message text.
  • Example :

             DCL        VAR(&MSGTEXT) TYPE(*CHAR) LEN(132)

             RTVMSG     MSGID(CPF2812) MSGF(QCPFMSG) MSG(&MSGTEXT)
             SNDPGMMSG  MSGID(CPF9898) MSGF(QCPFMSG) MSGDTA(&MSGTEXT) +
                          MSGTYPE(*ESCAPE)

  • This example retrieves the first-level text of message CPF2812 from the QCPFMSG message file into &MSGTEXT, then resends it as an escape message. (CPF9898 is the general-purpose escape message; *ESCAPE messages must be sent with a predefined message ID, not impromptu MSG text.)

Key Differences Summarized :

Feature | RTVJOBA | RTVMSG
Purpose | Retrieve job attributes | Retrieve message text
Target | Job characteristics | Message from a message file
Use Cases | Job control, user information, environment checks | Displaying messages, error handling, internationalization

 

In essence:

  • RTVJOBA provides information about a job.
  • RTVMSG provides information from a message file.

Both commands are valuable tools in CL programming, enabling you to create more dynamic and responsive programs by accessing job attributes and retrieving message text.

 

What are the different types of RPG programming?

RPG (Report Program Generator) has evolved significantly since its inception, leading to different versions and programming styles. Here's a breakdown of the main types of RPG programming you might encounter:

1. Historical RPG Versions:

  • RPG II: An early version, primarily used for report generation on the System/3 family and other older IBM systems. It had a very rigid, column-oriented syntax.
  • RPG III: Introduced on the System/38, it brought significant advancements like structured programming constructs (IF-THEN-ELSE, DO loops), subroutines, and improved data handling.

2. Modern RPG (RPG IV/ILE RPG):

  • RPG IV (also known as ILE RPG): This is the most current and widely used version of RPG. It's characterized by:
    • Integrated Language Environment (ILE): Allows RPG programs to seamlessly interact with other ILE languages like COBOL and C, enabling code sharing and modular development.
    • Free-form syntax: Offers a more modern, less rigid syntax compared to earlier versions, making code easier to read and write.
    • Enhanced features: Includes support for modern programming concepts like data structures, subprocedures, and improved file handling.

3. RPG Programming Styles:

Within RPG IV, you'll find two distinct coding styles:

  • Traditional (Column-oriented): This style uses a fixed format with specific columns for different parts of the code. It's often seen in older RPG programs.
  • Free-form: This style provides a more flexible syntax, allowing you to write code in a less restrictive format. It's the preferred style for new RPG development due to its readability and maintainability.

Key Differences Summarized:

Feature | RPG II/III | RPG IV/ILE RPG
Syntax | Column-oriented | Free-form (column-oriented supported for compatibility)
Structure | Less structured | More structured, with support for modern constructs
Integration | Limited | Integrated with other ILE languages
Features | Basic report generation and data manipulation | Extensive features for modern application development
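
For instance, here is a small sketch of an ILE RPG subprocedure, one of the features that distinguishes RPG IV (the names and the 7% rate are invented for illustration):

**FREE
ctl-opt nomain;

// Exported subprocedure: returns the amount with 7% tax added
dcl-proc AddTax export;
  dcl-pi *n packed(9: 2);
    amount packed(9: 2) const;
  end-pi;

  return amount * 1.07;
end-proc;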

 

Which type should you use?

  • For new development, RPG IV (ILE RPG) with free-form syntax is the recommended approach. It provides the most modern features, flexibility, and integration capabilities.
  • You might encounter older RPG II/III programs if you're working with legacy systems. Understanding these versions can be helpful for maintenance or migration purposes.

Important Notes:

  • While RPG has been traditionally associated with IBM midrange systems (AS400, IBM i), it's worth noting that there have been implementations of RPG for other platforms over the years.
  • RPG continues to evolve with ongoing updates and enhancements, ensuring its relevance in the IBM i ecosystem.

By understanding the different types of RPG programming, you can choose the most appropriate approach for your development needs and effectively work with RPG code in various IBM i environments.

What is the difference between RPG III, RPG IV, and RPG Free Format?

The distinctions can be confusing because they mix product versions with coding styles. Here's a breakdown of RPG III, RPG IV, and RPG Free Format:

RPG III

  • The Bridge: RPG III was a significant step up from the very early RPG versions. It introduced structured programming concepts (like IF-THEN-ELSE and DO loops), subroutines, and better data handling. This made RPG more powerful and flexible.
  • Still Column-Bound: However, RPG III still relied on a rigid, column-oriented syntax. You had to write code in specific columns on the screen, which could be cumbersome.
  • Legacy: You'll likely encounter RPG III programs if you work with older IBM i (AS400) systems. It's important to understand it for maintenance, but it's not used for new development.

RPG IV

  • Modernization: RPG IV (also known as ILE RPG) is the current, widely used version. It brought major changes:
    • Integrated Language Environment (ILE): This is a big deal! It allows RPG programs to work seamlessly with other ILE languages like COBOL and C. This enables code sharing and modular development.
    • Free-form Syntax: RPG IV introduced the option of free-form syntax, which we'll talk about more in a moment.
    • Enhanced Features: RPG IV has grown over the years with lots of improvements: data structures, subprocedures, better file handling, and more.

RPG Free Format

  • A Style within RPG IV: This is where the confusion often comes in. RPG Free Format is not a separate version of RPG. It's a coding style within RPG IV.
  • Flexibility: Free-form syntax lets you write RPG code in a more modern, less restrictive way. You're not tied to specific columns anymore. It's much more like languages you might be familiar with (C#, Python, etc.).
  • The Standard: For new RPG IV development, free-form is the preferred style. It makes code easier to read, write, and maintain.

Here's a table to summarize:

Feature | RPG III | RPG IV | RPG Free Format
Version | Older version | Current version | Coding style within RPG IV
Syntax | Column-oriented | Column-oriented or free-form | Free-form only
ILE | No | Yes | Yes (because it is RPG IV)
Use for new development | No | Yes | Yes (recommended style)
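
A minimal free-form sketch, to give a feel for the style (the program just displays a greeting):

**FREE
ctl-opt dftactgrp(*no);

dcl-s greeting varchar(30) inz('Hello from free-form RPG');

dsply greeting;
*inlr = *on;
return;
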
Explain the use of %SCAN, %CHECK, and %SUBST in RPG.

%SCAN, %CHECK, and %SUBST are built-in functions (BIFs) in RPG IV (ILE RPG) that provide powerful string manipulation capabilities. Let's explore each one:

%SCAN (Scan for Substring) :

  • Purpose: %SCAN searches for the first occurrence of a substring within a string. It returns the starting position of the substring if found, or 0 if it's not found.
  • Syntax: %SCAN(substring : string : start)
    • substring: The substring you're looking for.
    • string: The string you're searching within.
    • start: (Optional) The starting position for the search. If omitted, the search starts at the beginning of the string.
  • Example :
  • DCL-S  String      VARCHAR(50)  INZ('This is a test string.');
    DCL-S  Substring   VARCHAR(10)  INZ('test');
    DCL-S  Position     INT(10);
    
    Position = %SCAN(Substring : String); // Position will be 11
    
    IF Position > 0;
      // Substring found at position 'Position'
      DSPLY ('Substring found at position ' + %CHAR(Position));
    ELSE;
      // Substring not found
      DSPLY ('Substring not found.');
    ENDIF;
    
    Position = %SCAN('is' : String : 4); // Position will be 6 (finds the second "is")

%CHECK (Check Characters) :

  • Purpose: %CHECK verifies if a string contains only characters from a specified set. It returns the position of the first character that is not in the set, or 0 if all characters are in the set. %CHECKR (Check Reverse) does the opposite, finding the first character from the right that is not in the set.
  • Syntax: %CHECK(characters : string) or %CHECKR(characters : string)
    • characters: The set of valid characters.
    • string: The string to be checked.
  • Example :
  • DCL-S  String      VARCHAR(20)  INZ('ABC123XYZ');
    DCL-S  ValidChars  VARCHAR(10)  INZ('ABCDEFGHIJKLMNOPQRSTUVWXYZ');
    DCL-S  InvalidPos   INT(10);
    
    InvalidPos = %CHECK(ValidChars : String); // InvalidPos will be 4 (the '1')
    
    IF InvalidPos > 0;
      DSPLY ('Invalid character found at position ' + %CHAR(InvalidPos));
    ELSE;
      DSPLY ('All characters are valid.');
    ENDIF;
    
    InvalidPos = %CHECKR(ValidChars : String); // InvalidPos will be 6 (the '3', the rightmost invalid character)

%SUBST (Substring) :

  • Purpose: %SUBST extracts a portion of a string.
  • Syntax: %SUBST(string : start : length)
    • string: The string from which to extract the substring.
    • start: The starting position of the substring.
    • length: The length of the substring to extract. If omitted, it extracts to the end of the string.
  • Example :
  • DCL-S  String      VARCHAR(30)  INZ('This is a test string.');
    DCL-S  Substring   VARCHAR(15);
    
    Substring = %SUBST(String : 6 : 4); // Substring will be 'is a'
    
    Substring = %SUBST(String : 11); // Substring will be 'test string.' (from position 11 to end)

Key Differences and Use Cases :

  • %SCAN: Use it when you need to find the location of a specific substring within a larger string. Useful for parsing data, searching text, etc.
  • %CHECK / %CHECKR: Use them to validate data, ensuring that a string contains only allowed characters. Useful for data entry validation, cleaning data, etc.
  • %SUBST: Use it when you need to extract a specific portion of a string. Useful for manipulating data, extracting parts of a string, etc.

These BIFs are frequently used in RPG programs for tasks like :

  • Data validation: Checking if input data is in the correct format.
  • String parsing: Extracting information from strings.
  • Data manipulation: Modifying and transforming strings.
  • Report formatting: Preparing data for output in reports.
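To see how these BIFs combine in practice, here's a minimal free-form sketch that splits an email address into its user and domain parts (the variable names and the address are made up):

**FREE
Dcl-S Email   VarChar(60) Inz('jsmith@example.com');
Dcl-S AtPos   Int(10);
Dcl-S UserId  VarChar(60);
Dcl-S Domain  VarChar(60);

AtPos = %Scan('@' : Email);                // locate the separator
If AtPos > 1;
  UserId = %Subst(Email : 1 : AtPos - 1);  // 'jsmith'
  Domain = %Subst(Email : AtPos + 1);      // 'example.com'
EndIf;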

Handling dates and times in RPGLE involves using specific data types, built-in functions (BIFs), and understanding the various formats available. Here's a comprehensive overview:

1. Date and Time Data Types:

  • Date: D data type (the DATE keyword in free-form declarations). Stores a date in a specific format (e.g., YYYY-MM-DD, MM/DD/YY).
  • Time: T data type (TIME in free-form). Stores a time in a specific format (e.g., HH.MM.SS, HH:MM:SS).
  • Timestamp: Z data type (TIMESTAMP in free-form). Stores both date and time.


2. Defining Date/Time Variables :

DCL-S  MyDate       DATE       INZ(D'2024-03-15');  // Initialized to March 15, 2024
DCL-S  MyTime       TIME       INZ(T'14.30.00');    // Initialized to 2:30 PM
DCL-S  MyTimestamp  TIMESTAMP  INZ(Z'2024-03-15-14.30.00.000000'); // Date and time

 

3. Date/Time Formats:

  • System-defined formats: The default format is *ISO (YYYY-MM-DD for dates, HH.MM.SS for times), but this can be changed with control-specification keywords (DATFMT/TIMFMT) or system values.
  • User-defined formats: You can specify the format when defining date/time variables or using BIFs. Common formats include:
    • Dates: *MDY (MM/DD/YY), *DMY (DD/MM/YY), *YMD (YY/MM/DD), *USA (MM/DD/YYYY), *JUL (Julian), etc.
    • Times: *HMS (HH:MM:SS), *ISO (HH.MM.SS), *USA (HH:MM AM/PM), etc.
  • Format codes: You pass format codes to BIFs when converting between character and date/time values. Example: %CHAR(MyDate : *YMD) returns the date as a character string in YY/MM/DD form.


4. Built-in Functions (BIFs):

  • %DATE: Converts a character or numeric value to a date.
  • %TIME: Converts a character or numeric value to a time.
  • %TIMESTAMP: Converts a character or numeric value to a timestamp.
  • %CHAR: Converts a date, time, or timestamp to a character value. Crucial for displaying or using date/time values in character strings.
  • %DAYS, %MONTHS, %YEARS: Adds or subtracts days, months, or years from a date.
  • %DIFF: Calculates the difference between two dates or times.
  • %SUBDT: Extracts a portion of a date or time (e.g., the month from a date).
  • %HOURS, %MINUTES, %SECONDS: Build time durations for time and timestamp arithmetic, just as %DAYS, %MONTHS, and %YEARS do for dates.
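Since %SUBDT and %DIFF are easy to confuse, here's a minimal sketch of the two (the dates are arbitrary): %SUBDT pulls one component out of a single value, while %DIFF measures the distance between two values.

**FREE
Dcl-S StartDate  DATE  Inz(D'2024-03-15');
Dcl-S EndDate    DATE  Inz(D'2024-05-01');
Dcl-S MonthNum   Uns(3);
Dcl-S DaysApart  Int(10);

MonthNum  = %SubDt(StartDate : *M);             // 3 (the month portion)
DaysApart = %Diff(EndDate : StartDate : *DAYS); // 47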


5. Date/Time Arithmetic:

You can perform arithmetic operations on dates and times :

MyDate = MyDate + %DAYS(7);   // Add 7 days
MyDate = MyDate - %MONTHS(1); // Subtract 1 month
MyTimestamp = MyTimestamp + %MINUTES(30); // Add 30 minutes


6. Converting Between Formats :

// Convert date to character in a specific format
MyCharDate = %CHAR(MyDate : *MDY); // MyCharDate will be in MM/DD/YY format

// Convert character to date
MyDate = %DATE('12/25/2024' : *USA); // *USA = MM/DD/YYYY (use *MDY for a 2-digit year)

// Get the current date in YYYYMMDD format
CurrentDate = %CHAR(%DATE() : *YMD);


7. Working with Timestamps :

Timestamps are particularly useful for tracking events and recording when something occurred.

// Get the current timestamp
Now = %TIMESTAMP();


8. Handling Date/Time Errors:

  • Data type mismatch: If you try to move a value of the wrong data type into a date/time field, you'll get an error.
  • Invalid date/time values: Make sure the date/time values you're working with are valid (e.g., no February 30th).
  • Format errors: When converting between character and date/time values, ensure the formats match.

Example :

DCL-S  OrderDate      DATE  INZ(D'2024-03-01');
DCL-S  ShipDate       DATE;
DCL-S  DaysToShip     INT(10);
DCL-S  DueDate        DATE  INZ(D'2024-12-31'); // Example due date

ShipDate = OrderDate + %DAYS(10);  // 10 days after order

DaysToShip = %DIFF(ShipDate : OrderDate : *D); // Calculate days between dates

IF ShipDate > DueDate;
  DSPLY ('Order is late!');
ENDIF;

DSPLY ('Order Date: ' + %CHAR(OrderDate : *MDY));
DSPLY ('Ship Date: ' + %CHAR(ShipDate : *MDY));


By understanding these concepts and using the appropriate BIFs, you can effectively handle date and time operations in your RPGLE programs. Remember to pay close attention to data types and formats to avoid errors.

 

 

Data structures in RPG (specifically ILE RPG or RPG IV) are powerful tools for grouping related data items together. They provide a way to organize data, improve code readability, and simplify data manipulation. Here's a comprehensive guide to using data structures in RPG:

1. Defining Data Structures :

You define a data structure with the DCL-DS ... END-DS keywords in free-form RPG (or the DS keyword in fixed-form Definition specifications).

Dcl-Ds  MyDataDS; // Basic data structure
  // subfields go here
End-Ds;


2. Subfields :

Within a data structure, you define individual data items called subfields. You specify the name, data type, and length of each subfield.

Dcl-Ds  MyDataDS;
  Name    Char(20);
  Age     Int(3);
  Salary  Packed(7:2);
End-Ds;


3. Qualified Data Structures :

For better code organization and to avoid naming conflicts, it's highly recommended to use qualified data structures. This means you access subfields using the data structure name as a qualifier.

Dcl-Ds  MyDataDS  Qualified;  // Note the Qualified keyword
  Name    Char(20);
  Age     Int(3);
  Salary  Packed(7:2);
End-Ds;

MyDataDS.Name = 'John Doe';
MyDataDS.Age = 30;
MyDataDS.Salary = 50000.00;


4. Like-named Subfields :

You can define subfields based on the definition of existing fields using the LIKE keyword. This helps maintain consistency and reduces redundancy.

Dcl-s  CustomerName  Char(20); // Existing field

Dcl-Ds  CustInfoDS  Qualified;
  Name    Like(CustomerName); // Same type and length as CustomerName
  CustNo  Int(10);
End-Ds;

 

5. Data Structure Types :

  • Simple Data Structures: These are the basic type, as shown in the examples above.
  • Program-described Data Structures: The subfields are defined entirely within the program source, as in the examples above.
  • Externally-described Data Structures: The subfields come from an external definition, such as a database file's record format. You use the EXTNAME keyword (or LIKEREC to mirror a specific record format).


6. Using Data Structures :
You access subfields of a qualified data structure using the dot notation: DataStructureName.SubfieldName.

Dsply (MyDataDS.Name);
If MyDataDS.Age >= 18;
  // ...
Endif;

7. Initializing Data Structures:

  • INZ: You can initialize subfields when defining the data structure.

Dcl-Ds  ProductDS  Qualified  Inz;
  ProdCode  Char(10)    Inz('ABC-123');
  Price     Packed(9:2) Inz(0);
End-Ds;

  • CLEAR: You can clear a data structure or its subfields using the CLEAR operation.

Clear MyDataDS; // Clears all subfields
Clear MyDataDS.Name; // Clears only the Name subfield


8. Data Structure Arrays:

You can define arrays of data structures, which is very useful for working with lists of related data.

Dcl-Ds  EmployeeDS  Qualified  Dim(100); // Array of 100 employee data structures
  Name  Char(20);
End-Ds;

EmployeeDS(1).Name = 'First Employee';
EmployeeDS(2).Name = 'Second Employee';
// ...


9. Examples :

  • Working with Files :
  • Dcl-F  CustomerFile  Keyed; // Externally described, keyed disk file
    
    // LIKEREC builds a qualified DS from a record format
    // (CustRec is assumed to be the file's record-format name)
    Dcl-Ds  CustData  LikeRec(CustRec : *Input);
    
    Read CustomerFile CustData;
    If %Eof(CustomerFile);
      // ...
    Else;
      Dsply (CustData.CustName); // Accessing file data through the DS
    Endif;
  • Passing Parameters :
  • // Prototype shared by the caller and the called procedure
    Dcl-Pr  UpdateCust  ExtProc('UPDATECUST');
      CustInfo  LikeDs(CustInfoDS);
    End-Pr;
    
    // In the calling program
    Dcl-Ds  MyCustInfo  LikeDs(CustInfoDS);
    UpdateCust(MyCustInfo);

     

Benefits of using Data Structures :

  • Organization: Group related data items together, making your code easier to understand.
  • Readability: Using qualified names makes it clear which data belongs to which structure.
  • Maintainability: Changes to data definitions are easier to manage.
  • Simplified data manipulation: You can move or clear entire data structures at once.
  • Parameter passing: Passing data structures as parameters simplifies subroutine calls.

SQL on AS400 (now IBM i) is a powerful way to interact with your data. It's deeply integrated into the system and offers a lot of flexibility. Here's how it works:

1. DB2 for i :

  • Integrated Database: The database system on IBM i is called DB2 for i. It's a core part of the operating system, not a separate add-on. This tight integration means that SQL is deeply woven into how you work with data on the system.

2. Accessing Data :

  • Tables: In SQL, data is organized in tables, which are similar to files in the traditional AS400 file system.
  • SQL Statements: You use SQL statements to perform actions on these tables:
    • SELECT: Retrieve data
    • INSERT: Add new data
    • UPDATE: Modify existing data
    • DELETE: Remove data
    • CREATE: Define tables and other database objects
    • DROP: Delete tables and other database objects

3. Running SQL :

You can run SQL in several ways on IBM i:

  • Interactive SQL:
    • STRSQL: This command starts an interactive SQL session where you can type and execute SQL statements directly. It's great for quick queries and testing.
    • Query Manager: This tool provides a user-friendly interface for creating, storing, and running SQL queries, even if you're not an SQL expert.
  • Embedded SQL:
    • Within Programs: You can embed SQL statements directly into programs written in languages like RPG, COBOL, or C. This lets your programs interact with the database seamlessly.
    • Static SQL: The SQL statements are fixed within the program.
    • Dynamic SQL: The SQL statements can be built or modified at runtime, providing more flexibility.
  • ODBC/JDBC:
    • External Access: You can connect to the DB2 for i database from external applications using standard interfaces like ODBC (Open Database Connectivity) and JDBC (Java Database Connectivity). This allows tools like Microsoft Excel or Java applications to access and manipulate data on your IBM i.

4. Key Features :

  • Powerful Querying: SQL provides a rich set of features for querying data, including:
    • WHERE clause for filtering data
    • JOIN operations for combining data from multiple tables
    • GROUP BY and aggregate functions for summarizing data
    • ORDER BY for sorting data
  • Data Integrity: DB2 for i supports features like:
    • Transactions to ensure data consistency
    • Constraints to enforce rules about the data
    • Security features to control access to data
  • Performance: DB2 for i is designed for performance, with features like:
    • Query optimization to find the most efficient way to execute queries
    • Indexing to speed up data retrieval

5. Working with AS400 Objects :

  • Seamless Integration: SQL works seamlessly with traditional AS400 objects like physical files and logical files. You can use SQL to query and manipulate data in these files.
  • Views: You can create views using SQL to define logical subsets of data from one or more tables. This simplifies complex queries and provides a way to present data in a specific way.

You're asking about two fundamental ways of defining and working with data on IBM i (formerly AS400). Here's a breakdown of the key differences between DDS and SQL tables:

DDS (Data Description Specifications) :

  • Traditional Approach: DDS is the older, more traditional method for defining files (tables) on IBM i. It's been around for a long time and is deeply ingrained in the system's architecture.
  • File-based: DDS focuses on describing the physical layout of files, including record formats, field names, data types, and key fields.
  • Record-oriented: When you work with DDS-defined files in RPG or other languages, you typically process data one record at a time.
  • Data Validation at Read Time: Data validation (checking if data meets the defined rules) typically occurs when data is read from a DDS file.
  • Limited Functionality: DDS has some limitations in terms of data manipulation and querying compared to SQL.

SQL Tables :

  • Modern Approach: SQL (Structured Query Language) is a more modern and standardized way of defining and working with data. It's widely used across different database systems.
  • Table-based: SQL focuses on defining tables with columns, data types, and constraints.
  • Set-oriented: SQL allows you to work with sets of data using powerful queries. You can retrieve, insert, update, or delete multiple rows at once.
  • Data Validation at Write Time: Data validation in SQL typically happens when data is written to a table. This ensures data integrity from the start.
  • Rich Functionality: SQL provides a wide range of features for data manipulation, querying, and management, including joins, subqueries, aggregate functions, and more.

Key Differences Summarized :

Feature DDS SQL Tables
Approach Traditional, file-based Modern, table-based
Data Validation Read time Write time
Data Access Record-oriented Set-oriented
Functionality More limited Rich and extensive
Standardization IBM i specific Industry standard

Which to use?

  • New Development: For new applications, SQL tables are generally recommended. They offer better functionality, performance, and standardization.
  • Legacy Systems: You'll likely encounter DDS-defined files when working with older IBM i applications. It's important to understand DDS for maintenance or migration purposes.
  • Coexistence: You can use both DDS and SQL together. For example, you might have some data in DDS files and other data in SQL tables. You can even use SQL to query data in DDS files.
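As a small illustration of that coexistence, the sketch below uses embedded SQL to read a DDS-described physical file exactly as if it were an SQL table; CUSTPF, CUSTNO, and CUSTNAME are hypothetical names.

**FREE
// SQLRPGLE source member: SQL over a DDS-described file.
Dcl-S CustName Char(20);

Exec SQL SELECT CUSTNAME INTO :CustName
         FROM CUSTPF
         WHERE CUSTNO = 1001;
If SqlCode = 0;
  Dsply CustName;
EndIf;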

Important Considerations:

  • Performance: SQL can often provide better performance, especially for complex queries, due to its query optimizer.
  • Data Integrity: SQL's data validation at write time can help ensure data consistency.
  • Developer Skills: SQL is a widely known and taught language, so it might be easier to find developers with SQL skills.

You create an SQL index in DB2 for i (on IBM i) using the CREATE INDEX statement. Here's a breakdown of the syntax and options:

Basic Syntax :

CREATE INDEX index-name
ON table-name (column1, column2, ...);
  • index-name: The name you want to give to your index. It's good practice to use a naming convention that makes it clear what the index is for.
  • table-name: The name of the table on which you're creating the index.
  • column1, column2, ...: The columns that you want to include in the index. You can include one or more columns. The order of the columns is important, as it affects how the index is used.

 

Example :
CREATE INDEX CustNameIdx
ON CUSTOMERS (LastName, FirstName);

This creates an index named CustNameIdx on the CUSTOMERS table, using the LastName and FirstName columns. Queries that filter by LastName and then FirstName will be able to use this index efficiently.

Key Options and Considerations:

* ASC/DESC : You can specify whether you want the index to be in ascending (ASC) or descending (DESC) order for each column. Ascending is the default.
CREATE INDEX OrderDateIdx
ON ORDERS (OrderDate DESC);  -- Index in descending order of OrderDate

* UNIQUE : You can create a unique index, which enforces that the combination of values in the indexed columns is unique across all rows in the table. This is often used for primary keys or other unique constraints.

CREATE UNIQUE INDEX CustNoIdx
ON CUSTOMERS (CustomerID);
  • Partitioned Tables: If your table is partitioned, you might need to include partitioning columns in the index or specify how the index is partitioned.

  • Expression-based Indexes: You can create indexes on expressions involving columns, not just on the columns themselves. This can be useful for optimizing queries that filter or sort by calculated values.

    CREATE INDEX UpperNameIdx
    ON CUSTOMERS (UPPER(LastName));
  • Index Type: While DB2 for i typically uses B-tree (binary radix) indexes, there are other index types, like Encoded Vector Indexes, that might be appropriate for specific workloads or data characteristics; a sketch follows this list. The default is a B-tree index.

  • Journaling: Indexes can be journaled, which is important for recovery in case of system failures. The journaling attributes of an index are often inherited from the table.

  • File vs. Index: It's important to distinguish between a file (which stores the actual data) and an index (which is a separate structure that helps speed up data access). An index is like the index in a book; it doesn't contain the data itself, but it tells you where to find it.


Best Practices:

  • Index selectively: Don't index every column. Indexes add overhead to data modification operations (inserts, updates, deletes). Only index the columns that are frequently used in WHERE clauses, JOIN conditions, ORDER BY clauses, or GROUP BY clauses.
  • Consider the order of columns: The order of columns in the index matters. Put the most frequently used columns first.
  • Test and monitor: After creating indexes, test your queries to make sure they're using the indexes effectively. You can use tools like Visual Explain to analyze query performance. Monitor your indexes over time to ensure they're still providing benefit.
  • Unique indexes for constraints: Use unique indexes to enforce primary key and unique constraints. This is often more efficient than using separate constraints.
28 .
What is the difference between FETCH FIRST 1 ROWS ONLY and LIMIT 1?

Both FETCH FIRST 1 ROWS ONLY and LIMIT 1 serve the same basic purpose: restricting the result set of a query to a single row. However, there are subtle differences, primarily related to SQL standards and database system support.

FETCH FIRST 1 ROWS ONLY

  • SQL Standard: This syntax is part of the SQL standard (specifically, it's defined in SQL:2008 and later).
  • Portability: It's generally considered more portable across different database systems that support the standard. If you're working with multiple database platforms, FETCH FIRST is often the preferred choice.
  • Expressiveness: You can extend FETCH FIRST to retrieve the first n rows (e.g., FETCH FIRST 10 ROWS ONLY). It's more flexible in this regard. You can also include an OFFSET clause to skip a certain number of rows before fetching the first n.

LIMIT 1

  • Database-Specific: The LIMIT clause is not part of the SQL standard. It's vendor syntax popularized by systems like MySQL and PostgreSQL (newer releases of DB2 for i accept it as well).
  • Less Portable: It's not universally supported. If you use LIMIT, your SQL might not work on all database platforms.
  • Simpler for Single Row: For simply retrieving one row, LIMIT 1 is often a bit shorter and easier to write than FETCH FIRST 1 ROWS ONLY.

Key Differences Summarized:

Feature FETCH FIRST 1 ROWS ONLY LIMIT 1
SQL Standard Standard (SQL:2008+) Non-standard
Portability More portable Less portable
Expressiveness Supports fetching multiple rows (FETCH FIRST n), OFFSET Typically only for single row (or a small, fixed number)
Syntax FETCH FIRST n ROWS ONLY [OFFSET m] LIMIT n

 

Which to use?

  • Portability: If you need your SQL to work across different database systems, use FETCH FIRST 1 ROWS ONLY. This is the most important consideration.
  • Simplicity (Single Row): If you're only ever retrieving a single row and portability isn't a concern, LIMIT 1 is slightly simpler.
  • Multiple Rows/Offset: If you need to retrieve the first n rows or use an offset, FETCH FIRST is the only option.

Example:

Both of these queries will return the first row from the CUSTOMERS table :

SELECT * FROM CUSTOMERS FETCH FIRST 1 ROWS ONLY;

SELECT * FROM CUSTOMERS LIMIT 1;

However, if you wanted the first 10 customers, you'd have to use FETCH FIRST :

SELECT * FROM CUSTOMERS FETCH FIRST 10 ROWS ONLY;


You can't do this with LIMIT in a standard way. Some databases do have a LIMIT n OFFSET m syntax, but it's not standard.

In short: For most cases, especially when portability is a factor, FETCH FIRST 1 ROWS ONLY is the better and more standard choice. LIMIT 1 is acceptable for quick, non-portable queries when you're absolutely sure you only want one row.
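Inside an RPG program, the same clause is handy for "latest row" lookups. A minimal embedded sketch, assuming hypothetical ORDERID and ORDERDATE columns on ORDERS:

**FREE
Dcl-S LastOrder Int(10);

// Most recent order: sort descending, keep one row.
Exec SQL SELECT ORDERID INTO :LastOrder
         FROM ORDERS
         ORDER BY ORDERDATE DESC
         FETCH FIRST 1 ROWS ONLY;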

 

Embedded SQL in RPG (specifically ILE RPG) allows you to seamlessly integrate SQL statements directly within your RPG programs. This enables your programs to interact with the database for tasks like retrieving, inserting, updating, and deleting data. Here's a comprehensive guide:

1. Setting up your RPG program:

  • Source Type and Compiler: You need to tell the system that your program contains embedded SQL. You do this by creating the source as a SQLRPGLE member and compiling it with CRTSQLRPGI; the SQL precompiler then processes the EXEC SQL statements before the RPG compiler runs.

**FREE  // For free-form RPG
CTL-OPT OPTION(*SRCSTMT) DFTACTGRP(*NO);
// Compile with: CRTSQLRPGI OBJ(MYLIB/MYPGM) SRCFILE(MYLIB/QRPGLESRC)

 

2. Embedding SQL Statements :

You embed SQL statements within your RPG code using the EXEC SQL keywords. (Note that a SELECT must either use an INTO clause or a cursor; a bare SELECT cannot be embedded.)

EXEC SQL UPDATE CUSTOMERS SET CustName = :CustName WHERE CustNo = :CustNo;

EXEC SQL INSERT INTO ORDERS (CustNo, OrderDate) VALUES (:CustNo, :OrderDate);

* Host Variables : To pass data between your RPG program and the SQL statements, you use host variables. These are RPG variables preceded by a colon (:) in the SQL statement. They act as placeholders for data.

DCL-S  CustNo       INT(10);
DCL-S  CustName     CHAR(20);
DCL-S  OrderDate    D;

EXEC SQL SELECT CustName INTO :CustName FROM CUSTOMERS WHERE CustNo = :CustNo;


3. Handling SQL Results:

  • INTO Clause: The INTO clause in a SELECT statement specifies the host variables where the retrieved data will be stored.
  • Indicators: You can use indicator variables to handle null values. An indicator variable is an integer variable associated with a host variable. It's set to -1 if the corresponding data value is null.
DCL-S  CustName     CHAR(20);
DCL-S  CustNameInd  INT(5);  // Indicator variable

EXEC SQL SELECT CustName INTO :CustName :CustNameInd FROM CUSTOMERS WHERE CustNo = :CustNo;

IF CustNameInd < 0;
  // CustName is null
ELSE;
  // CustName has a value
ENDIF;

* SQLCA (SQL Communication Area): The SQLCA is a structure that contains information about the execution of SQL statements, including error codes. It's essential for error handling. In RPG the SQL precompiler includes it automatically, so you don't declare it yourself; you simply reference fields such as SQLCODE and SQLSTATE.

EXEC SQL UPDATE CUSTOMERS SET CustName = :CustName WHERE CustNo = :CustNo;

IF SQLCODE <> 0;
  // Handle the error (SQLCODE contains the error code)
  Dsply ('SQL Error: ' + %Char(SQLCODE));
ENDIF;

 

4. Working with Multiple Rows (Cursors):

If your SELECT statement can return multiple rows, you need to use a cursor.

// Declare the cursor
EXEC SQL DECLARE CustCursor CURSOR FOR
         SELECT CustNo, CustName FROM CUSTOMERS;

// Open the cursor
EXEC SQL OPEN CustCursor;

// Fetch rows from the cursor in a loop
DOW SQLCODE = 0;
  EXEC SQL FETCH CustCursor INTO :CustNo, :CustName;
  IF SQLCODE = 0; // Check after each fetch (SQLCODE 100 = no more rows)
    // Process the retrieved data
    Dsply (CustName);
  ENDIF;
ENDDO;

// Close the cursor
EXEC SQL CLOSE CustCursor;

 

5. Dynamic SQL:

For more flexibility, you can use dynamic SQL, where the SQL statements are built at runtime.

DCL-S  SqlStmt     VARCHAR(200);
DCL-S  TableName   VARCHAR(50)  INZ('ORDERS');

// Note: EXECUTE runs a prepared non-SELECT statement;
// a prepared SELECT must be read through a cursor instead.
SqlStmt = 'DELETE FROM ' + TableName + ' WHERE OrderDate < CURRENT DATE - 1 YEAR';

EXEC SQL PREPARE MyStmt FROM :SqlStmt;
EXEC SQL EXECUTE MyStmt;
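Dynamic SQL is safer and usually faster when values are supplied through parameter markers (?) rather than concatenated into the statement string; the sketch below assumes hypothetical PRIORITY and ORDERAMT columns.

**FREE
Dcl-S SqlStmt VarChar(200);
Dcl-S MinAmt  Packed(9:2) Inz(100.00);

// The ? marker is bound to a host variable at EXECUTE time.
SqlStmt = 'UPDATE ORDERS SET PRIORITY = 1 WHERE ORDERAMT > ?';
Exec SQL PREPARE Stmt2 FROM :SqlStmt;
Exec SQL EXECUTE Stmt2 USING :MinAmt;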


6. Error Handling:

Always check SQLCODE (or SQLSTATE) after each SQL statement to handle potential errors.


7. Examples:

  • Inserting Data :
  • EXEC SQL INSERT INTO ORDERS (CustNo, OrderDate, OrderAmt) VALUES (:CustNo, :OrderDate, :OrderAmt);
    IF SQLCODE <> 0;
      // Handle insert error
    ENDIF;
  • Updating Data :
  • EXEC SQL UPDATE CUSTOMERS SET CustName = :NewName WHERE CustNo = :CustNo;
    IF SQLCODE <> 0;
      // Handle update error
    ENDIF;

Key Considerations :

  • Binding: At compile time, the precompiler binds the embedded statements into an access plan that is stored with the program object.
  • Compile Options: Ensure your compile options are set correctly for embedded SQL (e.g., COMMIT and CLOSQLCSR on CRTSQLRPGI).
  • SQL Precompiler: The EXEC SQL statements are processed by the SQL precompiler before the RPG compiler runs.
  • Performance: Be mindful of performance, especially when working with large datasets. Use indexes effectively.

Embedded SQL is a crucial technique for integrating database access into your RPG programs. By mastering its use, you can create powerful applications that leverage the full capabilities of DB2 for i.
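Putting the pieces together, here's a minimal but complete sketch of a SQLRPGLE program (table and column names are hypothetical; compile with CRTSQLRPGI):

**FREE
Ctl-Opt DftActGrp(*No) ActGrp(*New);

Dcl-S CustNo   Int(10) Inz(1001);
Dcl-S CustName Char(20);

Exec SQL SELECT CustName INTO :CustName
         FROM CUSTOMERS
         WHERE CustNo = :CustNo;

If SqlCode = 0;
  Dsply ('Found: ' + %TrimR(CustName));
ElseIf SqlCode = 100;
  Dsply 'Customer not found';
Else;
  Dsply ('SQL error: ' + %Char(SqlCode));
EndIf;

*InLr = *On;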

 

 

Triggers in DB2 are a powerful mechanism that allows you to automatically execute a set of SQL statements when a specific event occurs on a table. Think of them as event-driven actions that help you enforce business rules, maintain data integrity, and automate tasks.

Here's a breakdown of how triggers work and how to implement them in DB2:

What are Triggers?

  • Event-driven: Triggers are activated by events like inserting, updating, or deleting data in a table.
  • Automatic execution: Once defined, triggers automatically execute whenever the associated event occurs. You don't need to explicitly call them.
  • Enforce rules: Triggers can be used to enforce complex business rules that might be difficult to implement with constraints alone.
  • Maintain data integrity: They can ensure data consistency across related tables.
  • Automate tasks: Triggers can automate tasks like auditing changes or generating notifications.

Types of Triggers:

DB2 supports different types of triggers based on when they are activated:

  • BEFORE triggers: Execute before the triggering event (insert, update, or delete) takes place. Useful for validating data or modifying it before it's stored.
  • AFTER triggers: Execute after the triggering event. Useful for actions like updating related tables or logging changes.
  • INSTEAD OF triggers: Execute instead of the triggering event. Primarily used for views to allow modifications that wouldn't be directly possible otherwise.

Implementing Triggers in DB2:

You create triggers using the CREATE TRIGGER statement. Here's the basic syntax :

CREATE TRIGGER trigger-name
  {BEFORE | AFTER | INSTEAD OF} {INSERT | UPDATE | DELETE}
  ON table-name
  [REFERENCING {OLD ROW AS old-row-name} {NEW ROW AS new-row-name}]
  [FOR EACH {ROW | STATEMENT}]
  trigger-body

 

Let's break down the key parts :

  • trigger-name: The name you give to your trigger.
  • BEFORE | AFTER | INSTEAD OF: Specifies when the trigger executes.
  • INSERT | UPDATE | DELETE: The event that activates the trigger.
  • table-name: The table on which the trigger is defined.
  • REFERENCING: Allows you to refer to the old and new values of the row being modified (useful for BEFORE and AFTER triggers).
  • FOR EACH {ROW | STATEMENT}:
    • ROW: The trigger executes for each row affected by the triggering event.
    • STATEMENT: The trigger executes once for the entire SQL statement, even if it affects multiple rows.
  • trigger-body: The SQL code that will be executed when the trigger is activated.

Example:

Let's say you want to ensure that the OrderDate in your ORDERS table is never in the future. You can create a BEFORE trigger like this :

CREATE TRIGGER CheckOrderDate
  BEFORE INSERT ON ORDERS
  REFERENCING NEW ROW AS newOrder
  FOR EACH ROW
BEGIN
  IF newOrder.OrderDate > CURRENT DATE THEN
    SIGNAL SQLSTATE '70001' SET MESSAGE_TEXT = 'Order date cannot be in the future.';
  END IF;
END;


This trigger checks the OrderDate before a new row is inserted. If it's in the future, the trigger raises an error, preventing the insertion.

31 .
Explain the difference between LEFT JOIN and INNER JOIN in AS400 SQL.

You're asking about two fundamental types of joins in SQL, and understanding the difference is crucial for retrieving the correct data from your database. Here's a breakdown of LEFT JOIN and INNER JOIN in AS400 SQL (DB2 for i):

INNER JOIN

  • Matching Rows Only: An INNER JOIN returns only rows where there is a match in both tables based on the join condition. If a row in either table doesn't have a corresponding match in the other table, it's excluded from the result.
  • Example: Imagine you have two tables: Customers and Orders. An INNER JOIN on CustomerID would only return customers who have placed orders, and only those orders placed by customers in the Customers table.
SELECT c.CustomerID, c.Name, o.OrderID
FROM Customers c
INNER JOIN Orders o ON c.CustomerID = o.CustomerID;


LEFT JOIN

  • All Rows from Left Table: A LEFT JOIN (or LEFT OUTER JOIN) returns all rows from the left table (the table specified first in the FROM clause), and the matching rows from the right table.
  • NULLs for No Match: If a row in the left table doesn't have a match in the right table, the columns from the right table will have NULL values in the result.
  • Example: Using the same Customers and Orders tables, a LEFT JOIN would return all customers, even those who haven't placed any orders. For customers without orders, the OrderID column would be NULL.
SELECT c.CustomerID, c.Name, o.OrderID
FROM Customers c
LEFT JOIN Orders o ON c.CustomerID = o.CustomerID;

 

Key Differences Summarized :

Feature INNER JOIN LEFT JOIN
Rows Returned Only matching rows from both tables All rows from the left table, matching rows from the right table (or NULLs if no match)
Use Case When you need data only when it exists in both tables When you need all data from one table, and matching data from another table if available

User profiles are fundamental to security on AS400 (now IBM i). They define who can access the system and what they can do. Here's a breakdown of how to create and manage them:

1. Creating User Profiles

  • CRTUSRPRF Command: The primary way to create a user profile is with the CRTUSRPRF command. You'll need security administrator (*SECADM) special authority to do this (the *SECOFR user class includes it).

    CRTUSRPRF USRPRF(username) PASSWORD(password) USRCLS(userclass) ... 
    
    • USRPRF: The name of the user profile (e.g., SMITHJ).
    • PASSWORD: The user's initial password.
    • USRCLS: The user class (e.g., *USER, *PGMR, *SYSOPR, *SECADM, *SECOFR). This determines their base level of authority.
    • There are many other parameters to customize the profile, such as initial program, menu, library list, and more.
  • WRKUSRPRF Command: You can also create a user profile using the WRKUSRPRF command. This provides a menu-driven interface to work with user profiles.

2. Managing User Profiles

  • CHGUSRPRF Command: Use the CHGUSRPRF command to modify existing user profiles. You can change passwords, user class, library lists, and other attributes.

    CHGUSRPRF USRPRF(username) PASSWORD(newpassword) ...
    
  • DSPUSRPRF Command: The DSPUSRPRF command displays the details of a user profile.

    DSPUSRPRF USRPRF(username)
    
  • DLTUSRPRF Command: Use the DLTUSRPRF command to delete a user profile.

    DLTUSRPRF USRPRF(username)
    
  • WRKUSRPRF Command: As mentioned earlier, WRKUSRPRF provides a menu-driven interface to work with user profiles. You can use it to create, change, display, copy, or delete user profiles.

3. Key Concepts

  • User Classes: AS400 has predefined user classes that represent different levels of authority:

    • *USER: Basic user with limited access.
    • *PGMR: Programmer with authority to develop and test programs.
    • *SYSOPR: System operator with authority to manage system operations.
    • *SECADM: Security administrator with authority to manage system security.
    • *SECOFR: Security officer with the highest level of authority.
  • Special Authorities: In addition to user class, you can grant specific special authorities to a user profile. These authorities allow users to perform specific tasks, such as managing objects, controlling jobs, or auditing security events.

  • Object Authority: User profiles are granted authority to access and manipulate objects (files, programs, etc.). You can grant object-level authorities such as *USE, *CHANGE, and *ALL, or individual data authorities like *READ, *ADD, *UPD, *DLT, and *EXECUTE.

  • Group Profiles: You can create group profiles to group users together and grant them common authorities. This simplifies user management.

4. Security Best Practices

  • Strong Passwords: Enforce strong password policies, including minimum length, complexity requirements, and password expiration.
  • Principle of Least Privilege: Grant users only the authorities they need to perform their job duties.
  • Regular Audits: Regularly audit user profiles and their authorities to ensure they are appropriate.
  • Monitor User Activity: Monitor user activity for suspicious behavior.
  • Use Group Profiles: Use group profiles to simplify user management and ensure consistent authorities.
33 .
What is the role of WRKSPLF and WRKOUTQ?

WRKSPLF and WRKOUTQ are essential commands in IBM i (formerly AS400) for managing spooled files and output queues. Here's a breakdown of their roles:

WRKSPLF (Work with Spooled Files)

  • Purpose: WRKSPLF allows you to work with spooled files. Spooled files are essentially the output generated by programs, reports, or commands that are destined for a printer or other output device. Think of them as print jobs waiting to be processed.
  • Functionality:
    • Displaying spooled files: WRKSPLF shows a list of spooled files, allowing you to filter by user, printer, form type, and more.
    • Managing spooled files: You can perform actions on spooled files, such as:
      • Displaying: View the contents of a spooled file.
      • Printing: Send a spooled file to a printer.
      • Holding/Releasing: Control the printing of a spooled file.
      • Deleting: Remove unwanted spooled files.
      • Changing attributes: Modify properties of a spooled file (e.g., output queue, priority).
      • Copying: Create copies of spooled files.
      • Moving: Transfer spooled files to different output queues.
  • Usage: You can use WRKSPLF to:
    • Troubleshoot printing problems: See if a print job is stuck or has errors.
    • Manage print output: Control the order and timing of printing.
    • View reports: Access and review generated reports.

WRKOUTQ (Work with Output Queues)

  • Purpose: WRKOUTQ allows you to manage output queues. Output queues are objects that hold spooled files until they are sent to a printer or other output device. They act as a buffer between programs and printers.
  • Functionality:
    • Displaying output queues: WRKOUTQ shows a list of output queues, their status (e.g., open, closed, held), and the number of spooled files they contain.
    • Managing output queues: You can perform actions on output queues, such as:
      • Opening/Closing: Control whether an output queue is accepting new spooled files.
      • Holding/Releasing: Pause or resume the processing of spooled files in a queue.
      • Clearing: Remove all spooled files from a queue.
      • Changing attributes: Modify properties of an output queue (e.g., printer association, priority).
  • Usage: You can use WRKOUTQ to:
    • Manage printer resources: Ensure that print jobs are routed to the correct printers.
    • Control print flow: Prioritize or delay the printing of certain jobs.
    • Troubleshoot output queue issues: Resolve problems with output queues that are not functioning correctly.

Key Differences Summarized:

Feature WRKSPLF WRKOUTQ
Focus Spooled files Output queues
Actions Manage individual spooled files Manage output queues and their contents
Use Cases Troubleshooting print jobs, managing print output, viewing reports Managing printer resources, controlling print flow, troubleshooting output queue issues
34 .
What is the purpose of CHGJOB?

The CHGJOB command in IBM i (formerly AS400) allows you to change the attributes of a currently running job. It's a powerful tool for dynamically adjusting job characteristics without having to end and resubmit the job.

Here's a breakdown of its purpose and key uses:

Purpose:

CHGJOB lets you modify various aspects of a job while it's active. This can be crucial for:

  • Responding to changing conditions: If a job is running longer than expected or is impacting other work, you can use CHGJOB to adjust its run priority, time slice, or output priority.
  • Managing system resources: You can influence how the job competes for resources, for example by lowering its run priority or changing its time slice.
  • Improving diagnostics: You can raise the job's message logging level (LOG parameter) so the job log captures more detail, without restarting the job.
  • Controlling output: You can change the output queue or output priority for spooled files the job produces.

Key Parameters and Uses:

CHGJOB has many parameters, allowing you to modify a wide range of job attributes. Here are some of the most commonly used ones:

  • JOB: Specifies the job you want to change. You can identify the job by its name, user, and number. If you omit the JOB parameter, it defaults to the currently running job.
  • JOBPTY: Changes the job's scheduling priority. A lower number indicates a higher priority.
  • OUTPTY: Changes the output priority of spooled files created by the job.
  • RUNPTY: Changes the job's run (processing) priority.
  • TIMESLICE: Changes how much processor time the job receives before other jobs of equal priority get a turn.
  • JOBQ: Changes the job queue for a job that is still waiting on a job queue.
  • OUTQ: Changes the default output queue for spooled files the job produces.
  • LOG: Changes the message logging level, so the job log captures more (or less) detail.
  • INQMSGRPY: Changes how inquiry messages issued by the job are answered.

Note that some related changes use separate commands rather than CHGJOB: HLDJOB and RLSJOB hold and release a job, and CHGCURLIB and CHGLIBL change the current library and library list.

Examples:

  1. Changing Job Priority:

    CHGJOB JOB(MYJOB) JOBPTY(3)
    

    This command changes the scheduling priority of the job named MYJOB to 3.

  2. Changing Run Priority:

    CHGJOB RUNPTY(50)
    

    This command changes the run priority of the currently running job to 50.

  3. Raising the Logging Level:

    CHGJOB LOG(4 00 *SECLVL)
    

    This command sets the most detailed logging level for the currently running job, which is useful when troubleshooting.

  4. Changing the Time Slice:

    CHGJOB TIMESLICE(2000)
    

    This command sets the time slice for the currently running job to 2000 milliseconds.

You're asking about the core save commands on IBM i (formerly AS400), used for backing up your system and data. Here's a breakdown of SAVOBJ, SAVLIB, and SAVSYS:

SAVOBJ (Save Object)

  • Granular Control: SAVOBJ allows you to save individual objects or specific groups of objects within a library. This gives you very fine-grained control over what you're backing up.
  • Use Cases:
    • Saving specific programs or files that have been modified.
    • Backing up critical objects without saving an entire library.
    • Archiving individual objects.
  • Example:
    SAVOBJ OBJ(MYPROG) LIB(MYLIB) DEV(*SAVF) SAVF(MYLIB/SAVEFILE1)
    
    This command saves the program MYPROG from library MYLIB to the save file SAVEFILE1 in MYLIB.

SAVLIB (Save Library)

  • Library-Level Backup: SAVLIB is used to save an entire library, including all the objects within it (programs, files, data areas, etc.).
  • Use Cases:
    • Regularly backing up libraries to protect against data loss.
    • Creating copies of libraries for testing or development.
    • Moving libraries to another system.
  • Example:
    SAVLIB LIB(MYLIB) DEV(*SAVF) SAVF(MYLIB/SAVEFILE2)
    
    This command saves the entire library MYLIB to the save file SAVEFILE2 in MYLIB.

SAVSYS (Save System)

  • System Backup: SAVSYS saves the foundation of the system: the Licensed Internal Code, the operating system (QSYS), security data (user profiles and authorities), and configuration objects. Note that it does not save user libraries; a complete backup (such as GO SAVE option 21) combines SAVSYS with SAVLIB *NONSYS and other save commands.
  • Use Cases:
    • Disaster recovery: Creating a complete backup of the system in case of a major failure.
    • System upgrades: Backing up the system before applying updates or installing new software.
    • System migration: Moving the entire system to another machine.
  • Important Notes:
    • SAVSYS must be run in a restricted state, meaning that most users and applications need to be offline.
    • It's typically performed during off-peak hours or scheduled downtime.
    • It's crucial for a complete and restorable backup of your IBM i environment.
  • Example:
    SAVSYS DEV(TAP01)
    
    This command saves the entire system to the tape device TAP01.

Key Differences Summarized:

Feature SAVOBJ SAVLIB SAVSYS
Scope Individual objects or groups of objects Entire library Operating system, security, and configuration data
Granularity Very fine-grained Library level System level
Use Cases Specific object backups, archiving Library backups, copies, migrations Disaster recovery, system upgrades, migrations
State Can be run online (for most objects) Can be run online (for most libraries) Must be run in a restricted state

Troubleshooting slow jobs on IBM i (AS400) requires a systematic approach. Here's a breakdown of common causes and how to investigate them:

1. Identify the Slow Job

  • WRKACTJOB: Use the WRKACTJOB command to display active jobs. Look for jobs that have been running for an unusually long time or have a high CPU percentage.
  • Job Logs: Check the job log of the suspected job for any error messages, long-running SQL queries, or other clues about the cause of the slowdown.

2. Common Causes and Solutions

  • CPU Bottleneck:

    • Cause: The job might be CPU-intensive, and the system's CPU might be overloaded.
    • Investigation:
      • Check CPU utilization using WRKACTJOB or performance monitoring tools.
      • Identify other CPU-intensive jobs that might be competing for resources.
    • Solutions:
      • Increase the job's priority if it's critical.
      • Optimize the job's code to reduce CPU usage (e.g., improve algorithms, reduce unnecessary calculations).
      • If the system is generally overloaded, consider upgrading the CPU or adding more processing power.
  • Memory Bottleneck:

    • Cause: The job might require more memory than is available, leading to excessive paging and poor performance.
    • Investigation:
      • Monitor memory usage and paging activity using performance monitoring tools.
      • Check the job's memory pool to see if it's experiencing high faulting rates.
    • Solutions:
      • Increase the amount of memory allocated to the job's memory pool.
      • Reduce the memory demands of the job (e.g., process data in smaller chunks).
      • If the system is generally low on memory, consider adding more RAM.
  • I/O Bottleneck:

    • Cause: The job might be waiting for data to be read from or written to disk, and disk I/O might be slow.
    • Investigation:
      • Monitor disk I/O activity using performance monitoring tools.
      • Check for disk contention or slow disk drives.
    • Solutions:
      • Optimize database queries (e.g., use indexes effectively).
      • Reduce the amount of data being read or written.
      • Consider upgrading to faster disk drives or using RAID to improve disk performance.
  • Lock Contention:

    • Cause: The job might be waiting for a lock on a file or database record that is held by another job.
    • Investigation:
      • Use the DSPRCDLCK command to display record locks and identify which job is holding the lock.
    • Solutions:
      • If possible, reschedule jobs to avoid lock contention.
      • Optimize application logic to reduce the duration of locks.
  • Network Bottleneck:

    • Cause: If the job involves network communication, network latency or bandwidth limitations can cause slowdowns.
    • Investigation:
      • Monitor network traffic and identify any network congestion.
      • Check for network errors or connectivity issues.
    • Solutions:
      • Upgrade network infrastructure or increase bandwidth.
      • Optimize network communication protocols.
  • Database Issues:

    • Cause: Slow SQL queries, missing indexes, or database configuration issues can affect job performance.
    • Investigation:
      • Analyze SQL queries used by the job using tools like Visual Explain.
      • Check for missing or inefficient indexes.
      • Review database configuration settings.
    • Solutions:
      • Optimize SQL queries (e.g., add indexes, rewrite queries).
      • Tune database parameters.
  • Software Issues:

    • Cause: Bugs in the application code or inefficient algorithms can lead to slow performance.
    • Investigation:
      • Review the application code for potential performance bottlenecks.
      • Use debugging tools to trace the execution of the job.
    • Solutions:
      • Fix bugs in the code.
      • Optimize algorithms and data structures.

3. Performance Monitoring Tools

  • Performance Monitor: Use the Performance Monitor (part of IBM Navigator for i) to collect and analyze performance data. This tool can help you identify bottlenecks and track performance trends.
  • Collection Services: Enable Collection Services to gather detailed performance data that can be used for in-depth analysis.

4. Other Tips

  • Check for System Errors: Review system logs for any errors that might be affecting job performance.
  • Consider System Values: Some system values can influence job scheduling and resource allocation. Review system values related to performance.
  • Update the System: Ensure the system is running the latest PTFs (Program Temporary Fixes) to address any known performance issues.

Troubleshooting slow jobs often involves a process of elimination. Start by identifying the job, then systematically investigate potential causes, using the tools and techniques described above. Remember to document your findings and any changes you make to the system or application.

Backup and Restore on AS/400 (IBM i)

Backing up and restoring data on AS/400 (IBM i) is essential for system recovery, disaster preparedness, and maintaining business continuity. Below are the key methods, commands, and best practices for backup and restore operations.

1. Backup Methods in AS/400
A. SAV Commands (Save Commands)

IBM i provides various SAV* commands to back up different objects:

Command Purpose
SAVLIB Saves a library (including objects and data)
SAVOBJ Saves a specific object (e.g., files, programs)
SAVDLO Saves documents and folders
SAVCFG Saves system configuration
SAVSYS Saves entire system (Licensed Internal Code, QSYS library, OS, and user profiles)
SAVSECDTA Saves security data (user profiles, authorization lists, passwords, etc.)
SAVCHGOBJ Saves only changed objects since last backup

B. Full System Backup (GO SAVE) :

IBM i provides the GO SAVE menu for automated backup operations.

Command :

GO SAVE
  • Presents a menu with predefined backup options.
  • Common options:
    • Option 21: Full system backup (OS, libraries, and configurations).
    • Option 22: System data only.
    • Option 23: All user data (user libraries, documents, and folders).

2. Backup Commands & Examples :
A. Backing Up an Entire Library (SAVLIB) :
SAVLIB LIB(MYLIB) DEV(TAP01) SAVACT(*LIB)
  • Saves MYLIB to tape device TAP01.
  • SAVACT(*LIB) enables save-while-active, so objects can be saved while they're in use.
B. Backing Up a Specific Object (SAVOBJ) :
SAVOBJ OBJ(MYFILE) LIB(MYLIB) DEV(*SAVF) SAVF(MYLIB/MYBACKUP)
  • Saves MYFILE from MYLIB into a save file (MYBACKUP).
C. Backing Up Security & Configuration Data :
SAVSECDTA DEV(TAP01)
SAVCFG DEV(TAP01)
  • Saves user profiles, passwords, authorization lists, and configurations.
D. Backing Up Only Changed Objects (SAVCHGOBJ)
SAVCHGOBJ OBJ(*ALL) LIB(MYLIB) DEV(TAP01) REFDATE(*SAVLIB)
  • Saves only objects changed since the last save of the library (REFDATE(*SAVLIB) is the default reference point).

3. Restore Methods in AS/400 :
A. RST Commands (Restore Commands) :

IBM i provides various RST* commands for restoring data:

Command Purpose
RSTLIB Restores an entire library
RSTOBJ Restores specific objects
RSTUSRPRF Restores user profiles
RSTCFG Restores system configurations
RSTAUT Restores object authorities (run after RSTUSRPRF)

B. Restore Commands & Examples :
A. Restoring an Entire Library (RSTLIB)
RSTLIB SAVLIB(MYLIB) DEV(TAP01) MBROPT(*ALL) ALWOBJDIF(*ALL)
  • Restores MYLIB from TAP01 tape.
  • MBROPT(*ALL): Restores all members of files.
  • ALWOBJDIF(*ALL): Allows restoring even if object differences exist.
B. Restoring a Specific Object (RSTOBJ)
RSTOBJ OBJ(MYFILE) SAVLIB(MYLIB) DEV(*SAVF) SAVF(MYLIB/MYBACKUP)
  • Restores MYFILE from MYBACKUP save file.
C. Restoring User Profiles (RSTUSRPRF)
RSTUSRPRF DEV(TAP01) USRPRF(*ALL)
  • Restores all user profiles from backup.
D. Restoring System Configuration (RSTCFG)
RSTCFG OBJ(*ALL) DEV(TAP01) OBJTYPE(*ALL)
  • Restores system configuration from backup.
E. Restoring the Entire System
  • There is no single restore command for the whole system: during disaster recovery, the Licensed Internal Code and operating system are reinstalled from the SAVSYS media via a D-mode IPL (Initial Program Load).
  • User profiles, configurations, libraries, and authorities are then restored with RSTUSRPRF, RSTCFG, RSTLIB, and RSTAUT.

4. Save File (SAVF) Backups (Disk-Based Backup)

Instead of using tapes, backups can be stored in a save file (SAVF) on disk.

A. Creating a Save File
CRTSAVF FILE(MYLIB/MYBACKUP)
B. Saving Data to a Save File
SAVLIB LIB(MYLIB) DEV(*SAVF) SAVF(MYLIB/MYBACKUP)
C. Restoring from a Save File
RSTLIB SAVLIB(MYLIB) DEV(*SAVF) SAVF(MYLIB/MYBACKUP)

5. Automating Backups Using Job Scheduler

IBM i allows scheduling backups using job scheduler (WRKJOBSCDE).

Example : Schedule Nightly Library Backup
ADDJOBSCDE JOB(BACKUP) CMD(SAVLIB LIB(MYLIB) DEV(TAP01))
FRQ(*WEEKLY) SCDDAY(*ALL) SCDTIME(230000)
  • Runs the backup every day at 11:00 PM (FRQ(*WEEKLY) with SCDDAY(*ALL) schedules it daily).

6. Best Practices for Backup & Restore

* Perform regular full backups (SAVSYS, SAVLIB, SAVSECDTA, SAVCFG).
* Use Save Files (SAVF) for faster, disk-based backups.
* Schedule incremental backups (SAVCHGOBJ) to save only changed objects.
* Verify backups using DSPTAP (Display Tape Contents).
* Perform test restores to ensure backup integrity.
* Store backups offsite for disaster recovery.
* Monitor backup jobs using WRKACTJOB.


7. How to Check Backup Status & Logs :
A. Check Tape Contents (DSPTAP)
DSPTAP DEV(TAP01) DATA(*SAVRST)
  • Displays what is stored on a tape.
B. Check System Logs for Backup Messages
DSPLOG PERIOD((*AVAIL *CURRENT))  
  • Shows errors or completion messages related to backup jobs.

Summary of Key Backup & Restore Commands :
Task Backup Command Restore Command
Entire Library SAVLIB LIB(MYLIB) DEV(TAP01) RSTLIB SAVLIB(MYLIB) DEV(TAP01)
Specific Object SAVOBJ OBJ(MYFILE) LIB(MYLIB) DEV(*SAVF) RSTOBJ OBJ(MYFILE) SAVLIB(MYLIB) DEV(*SAVF)
User Profiles SAVSECDTA DEV(TAP01) RSTUSRPRF DEV(TAP01) USRPRF(*ALL)
System Configuration SAVCFG DEV(TAP01) RSTCFG DEV(TAP01) OBJ(*ALL)
Entire System SAVSYS DEV(TAP01) Reinstall from SAVSYS media (D-mode IPL)

CPF (Control Program Facility) messages are absolutely fundamental to understanding and managing IBM i (formerly AS400) systems. They're the primary way the system communicates with users and administrators about everything from routine operations to critical errors. Here's a breakdown of their significance:

What are CPF Messages?

  • System Communication: CPF messages are how the IBM i operating system, as well as applications running on it, report events, errors, warnings, and informational messages. They're the system's way of telling you what's happening.
  • Message IDs: Each CPF message has a unique identifier (e.g., CPF0001, CPF2812). These IDs are crucial for identifying specific messages and finding more information about them in the message file (QCPFMSG).
  • Message Text: Along with the ID, CPF messages have descriptive text that explains the event or issue.
  • Message Types: CPF messages have different types, indicating their severity:
    • Informational: Provide general information.
    • Diagnostic: Offer details to help diagnose problems.
    • Inquiry: Require a response from the user or operator.
    • Notify: Alert the user to a condition that might require attention.
    • Escape: Indicate a serious error that often requires the program to terminate.

Significance of CPF Messages:

  1. Error Handling: CPF messages are essential for handling errors in programs and commands. CL programs, for example, use the MONMSG command to monitor for specific CPF messages and take appropriate actions. RPG programs can also check for SQL errors indicated by CPF messages.

  2. System Monitoring: System administrators rely on CPF messages to monitor the health and status of the system. They can be configured to receive alerts for critical messages, allowing them to proactively address problems.

  3. Troubleshooting: When something goes wrong on the system, CPF messages are the first place to look for clues. They provide valuable information about the nature of the problem and can guide you towards a solution.

  4. Job Management: Job logs, which contain CPF messages related to the execution of a job, are crucial for understanding how a job ran and for troubleshooting any issues that occurred.

  5. Auditing: CPF messages can be audited to track system events and user activity. This is important for security and compliance.

  6. User Interaction: Inquiry messages, a type of CPF message, allow the system to interact with users, requesting input or confirmation.

Working with CPF Messages:

  • DSPMSG: The DSPMSG command is used to display messages in a message queue. You can use it to view system messages, job log messages, or messages sent to a specific user.
  • WRKMSG: The WRKMSG command provides a menu-driven interface for working with messages.
  • Message Files: CPF messages are stored in message files. QCPFMSG is the system message file. You can create your own message files for application-specific messages.
  • Message Queues: Messages are delivered to message queues. Each user profile has a message queue, and there are system message queues as well.
  • Monitoring: System administrators can set up message monitoring to be alerted to specific CPF messages.

Example:

If a program tries to open a file that doesn't exist, the system issues a "file not found" escape message such as CPF9812 ("File &1 in library &2 not found"). The program can use MONMSG to trap this message and take appropriate action (e.g., display an error message to the user, create the missing file, or end the program).
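In RPG, the equivalent of CL's MONMSG is a MONITOR block; here's a minimal free-form sketch, assuming an externally described file named MYFILE:

**FREE
Dcl-F MyFile Disk(*Ext) UsrOpn; // opened under program control

Monitor;
  Open MyFile;                  // fails if the file is missing
On-Error;
  Dsply 'MYFILE could not be opened';
EndMon;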

API calls on AS/400 (now IBM i) serve the same fundamental purpose as they do in any other computing environment: to enable communication and interaction between different software components.

Here's a breakdown of why API calls are important on AS/400:

1. Accessing System Resources and Functions:

  • System APIs: IBM i provides a rich set of system APIs that allow programs to access and manipulate system resources and perform various functions. These APIs provide a standardized way to interact with the operating system, hardware, and other system components.
  • Examples: APIs exist for tasks like the following (a sketch using one such API follows this list):
    • Object management (creating, deleting, changing objects)
    • Spooled file management
    • User profile management
    • Job control
    • Network communication
    • Database access
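As a concrete illustration, QCMDEXC (the Execute Command API) runs a command string built at run time; it takes the command text and its length as a packed (15 5) number. The command being executed here is just an example:

  PGM
    DCL        VAR(&CMD) TYPE(*CHAR) LEN(200)
    DCL        VAR(&LEN) TYPE(*DEC)  LEN(15 5) VALUE(200)
    CHGVAR     VAR(&CMD) VALUE('SNDMSG MSG(''Step complete'') TOUSR(*SYSOPR)')
    CALL       PGM(QCMDEXC) PARM(&CMD &LEN)   /* execute the command string */
  ENDPGM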

2. Integrating with Other Applications:

  • Application Integration: APIs are essential for integrating AS/400 applications with other systems, whether they are other AS/400 applications or applications running on different platforms.
  • Data Exchange: APIs facilitate the exchange of data between different applications. For example, an AS/400 application might use an API to send data to a web application or receive data from a mobile app.
  • Modernization: APIs play a crucial role in modernizing AS/400 applications. By exposing the functionality of existing AS/400 programs as APIs, you can make them accessible to modern web and mobile applications, allowing you to extend the life and value of your legacy systems.

3. Enhancing Functionality:

  • Extending Capabilities: APIs allow you to extend the capabilities of AS/400 applications by leveraging functionality provided by other systems or services.
  • Example: An AS/400 application might use an API to access a mapping service to display location information or integrate with a payment gateway to process online transactions.

4. Simplifying Development:

  • Code Reusability: APIs promote code reusability. Once an API is created, it can be used by multiple applications, reducing development effort and improving consistency.
  • Abstraction: APIs abstract away the complexities of the underlying system, making it easier for developers to build applications that interact with the AS/400.

5. Security:

  • Controlled Access: APIs can provide a layer of security by controlling access to system resources and data. You can define who can use specific APIs and what data they can access.

How API Calls Work on AS/400:

  • Calling Programs: AS/400 programs (RPG, COBOL, CL) can make API calls to access system functions or interact with other applications.
  • Service Programs: APIs are delivered either as callable programs (such as QCMDEXC) or as procedures exported from service programs, which contain the code that implements the API's functionality.
  • Parameters: When making an API call, you typically pass parameters to the API to specify the data or options you want to use.
  • Return Values: APIs can return values to the calling program, such as status codes, data, or error messages (see the call sketch after this list).
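A sketch of the typical retrieve-API calling convention, using QUSROBJD (Retrieve Object Description): the caller passes a receiver variable, its length, a format name, the qualified object name, and the object type (CUSTLIB/CUSTFILE is hypothetical):

  PGM
    DCL        VAR(&RCVVAR) TYPE(*CHAR) LEN(90)            /* receiver for OBJD0100 data */
    DCL        VAR(&RCVLEN) TYPE(*INT)  LEN(4) VALUE(90)   /* receiver length, binary(4) */
    DCL        VAR(&QOBJ)   TYPE(*CHAR) LEN(20) +
                 VALUE('CUSTFILE  CUSTLIB   ')             /* name (10) + library (10)   */
    CALL       PGM(QUSROBJD) PARM(&RCVVAR &RCVLEN 'OBJD0100' &QOBJ '*FILE')
    /* &RCVVAR now holds the basic object description in OBJD0100 format */
  ENDPGM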

Multi-threading in AS/400 (now IBM i) allows a single job to execute multiple parts of its program concurrently. Think of it as having multiple workers within the same job, each handling a different task at the same time. This can significantly improve performance, especially for applications that can break their work into independent pieces.

How Multi-threading Works on AS/400

  • Jobs and Threads: In AS/400, a "job" is a unit of work that the system manages. Traditionally, a job would run a single program or a sequence of programs. With multi-threading, a single job can run multiple "threads." Each thread is like a lightweight process that can execute a part of the program independently.
  • Threads within a Job: Multiple threads share the same job environment, including memory and resources. This makes it efficient for them to communicate and share data.
  • Concurrency: The operating system manages the execution of these threads, switching between them rapidly to give the illusion of simultaneous execution. This concurrency allows the job to make progress on multiple tasks at the same time.

Benefits of Multi-threading

  • Improved Performance: By running multiple threads concurrently, a job can complete its work faster, especially if the tasks can be done in parallel.
  • Increased Throughput: Multi-threading can increase the overall throughput of the system by allowing it to handle more work in the same amount of time.
  • Responsiveness: For interactive applications, multi-threading can improve responsiveness by allowing the user interface to remain active while background tasks are being performed.

How Multi-threading is Handled on AS/400

  • Operating System Support: IBM i provides operating system support for creating and managing threads. The system handles the scheduling and execution of threads, ensuring that they have fair access to resources (a job-submission sketch follows this list).
  • Programming Languages: Programming languages like RPG, COBOL, and C support multi-threading through APIs or language extensions.
  • Pthreads: The most common way to implement multi-threading in ILE RPG is by using the POSIX Threads (Pthreads) API. This provides a standardized way to create, manage, and synchronize threads.
  • Thread-safe Programming: When writing multi-threaded applications, it's crucial to ensure that the code is "thread-safe." This means that multiple threads can access and modify shared data without causing conflicts or errors. Proper synchronization mechanisms (like mutexes and semaphores) are needed to protect shared resources.
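On the job-management side, a batch job must explicitly allow secondary threads; the ALWMLTTHD parameter of SBMJOB controls this (MYLIB/THRDPGM stands in for a threaded ILE program):

  SBMJOB     CMD(CALL PGM(MYLIB/THRDPGM)) JOB(THREADED) +
               ALWMLTTHD(*YES)            /* allow the job to start multiple threads */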

Considerations for Multi-threading

  • Complexity: Multi-threading adds complexity to application development. It requires careful planning and design to avoid issues like race conditions, deadlocks, and data corruption.
  • Debugging: Debugging multi-threaded applications can be more challenging than debugging single-threaded applications due to the concurrent execution of threads.
  • Overhead: There is some overhead associated with creating and managing threads. It's important to consider this overhead when designing multi-threaded applications.