SAP Basis

BASIS Responsibilities

The SAP Basis Administrator is responsible for managing the SAP environment. Responsibilities include configuring, monitoring, tuning, and troubleshooting the SAP technical environment on an ongoing basis, as well as scheduling and executing the SAP transport system and collaborating to resolve SAP transport and source-code problems. The SAP Basis Administrator is also responsible for the installation, upgrade, and maintenance of SAP systems. Additional areas include the evaluation and design of interfaces between SAP and external systems, maintenance of the SAP Data Dictionary and database objects, migration of SAP database and application configuration into production, and the analysis, development, and maintenance of data architectures and process models within SAP.

Another key area of responsibility is documenting and updating the existing SAP environment and working with IT and business units to modernize it.

The SAP Basis Administrator must possess the ability to analyze situations and provide problem resolution. Excellent written and oral communication skills are a requirement.

Example activities include:

1. Implement and maintain the multiple SAP instances that comprise the SAP environment (development, test, training and production).

2. Maintain the integrity of the SAP environment by managing the SAP Correction and Transport System (CTS) to ensure all configuration and development objects are promoted properly.

3. Introduce technical changes into the environment using a structured approach that minimizes risk and achieves high reliability, availability and performance of each SAP instance.

4. Design and implement an optimal SAP configuration to maximize system performance and availability.

5. Install and configure all required SAP database servers and application servers.

6. Manage SAP users, authorizations, and profiles.

7. Distribute the online SAP user workload and monitor and manage the SAP background job workload.

8. Configure and manage the SAP printing subsystem for all SAP instances.

9. Maintain SAP performance by planning and executing SAP tuning strategies.

10. Monitor all SAP systems (work processes, users, system logs, short dumps, locks, developer traces, system traces, disk space, etc.).

11. Administer the SAP database together with the Database Administrator (plan and perform database upgrades, apply database maintenance, design and maintain the physical database layout, perform database reorganizations, design and implement the backup and restore strategy, maintain database security, administer database performance, manage database storage, perform database problem determination and resolution, etc.).

12. Perform SAP client administration (create client, copy client, delete client, export/import client) as required.

13. Participate in the planning and implementation of SAP system upgrades.

14. Apply and migrate SAP maintenance (hot packages and kernel upgrades) through all systems using a structured methodology.

15. Develop and maintain system documentation for all SAP instances and interfaces.

16. Provide status reports for projects to management.


SAP BASIS ADMIN Roles & Responsibilities

I )  Administration includes user administration, client administration, and backup in SAP environments.

  • He should be able to do user administration: creating and deleting users, assigning and resetting passwords, locking and unlocking users.
  • He should be able to troubleshoot security or authorization problems using SU53, ST01, and SUIM.
  • He should be able to create roles using different methods: transactions, direct objects, missing authorizations, restrictions, etc.
  • He should be able to analyze and fix missing authorizations.
  • He should be able to do client administration: local client copies, remote client copies, creating and deleting clients.
  • He should be able to create and restore data backups.
  • He should be able to do printer/spool configuration and administration.
  • He should be able to manage database space allocation.
  • How to copy a user
    Use transaction SU01 or, from the System Administration Assistant (transaction SSAA), choose Entire view, then SAP System Administration, then Additional Administration Tasks, then System Users: Copying a User.
    In transaction SU01, enter ADMIN##, choose Copy, then enter the name BASIS##. Deselect Authorization profiles and Activity groups. Enter a new password for BASIS## twice and save.
  • What is the total number of clients supported per SAP system?
    •  Client numbers range from 000 to 999, so a total of 1,000 clients are supported per SAP system.
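As a quick illustration of that numbering scheme, the client key can be modeled as a zero-padded three-digit string (a hedged sketch; `is_valid_client` is a made-up helper, not an SAP API):

```python
def is_valid_client(client: str) -> bool:
    """An SAP client key is a three-digit string between 000 and 999."""
    return len(client) == 3 and client.isdigit()

# All 1000 values 000..999 are syntactically valid client keys.
assert sum(is_valid_client(f"{n:03d}") for n in range(1000)) == 1000
assert not is_valid_client("42")    # must be zero-padded to three digits
assert not is_valid_client("1000")  # out of range
```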
User Admin (User Roles, Profiles, Activity Groups and Authorizations), Client Admin, Backup

SU01 – User Maintenance (create, delete, lock, and copy users)

SU01D-User Display

SU02-  Maintain Authorization Profiles

SU03 – Maintain Authorizations

SU05-Maintain Internet users

SU10 – User Mass Maintenance/locks

SMLG – Maintain Logon Group

SUPC -Profiles for activity groups

SUIM–  Info system Authorizations, roles comparison

PFCG-Profile Generator(Activity Group Maintenance)

PFUD – User Master Data Reconciliation

SM19 -Security Audit Configuration(Trace a User’s Activity)

SSAA/SU01 – Copying a user

To disable multiple logins by the same user within the same client, set this parameter in the instance profile:

login/disable_multi_gui_login = 1

Availability of SAP Instance & Application:

SM52,SM21,SRZL, SM50, SM04, SM12, SM13, ST22, SM37, and SP01.

Table maintenance reports (run via SA38, or choose System, then Services, then Reporting)

·         To copy tables across clients, invoke RSCLTCOP

·         To make table adjustments across clients, RSAVGL00

·         To invoke the Substitution/Validation utility, invoke RGUGBR00

·         To transport SAP script files across systems, RSTXSCRP

·         To release batch-input sessions automatically, invoke RSBDCSUB

·         RSM13001 – delete cancelled update records

·         RSPO0041 – delete obsolete spool objects

·         RSPO0043 – delete spool lists left over from cancelled jobs

·         RSUSR003 – check the passwords of users SAP* and DDIC in all clients

·         RSUSR006 – list users locked due to incorrect logon attempts

List of inactive users (last logon) – SE38, report RSUSR200

Incorrect SAP logon attempts – RSUSR006

SCC3- Checking Client Copy Log

SCC4-Client Administration( New client Creation)

SCC5-Client Delete

SCC7-Client Import Post-Processing

SCC8- Client Export

SCCL- Local client copy within the same system

SCC9- Remote client copy (copy a client from another system)

How to Lock / Unlock a Client

To lock or unlock a client in an R/3 System, run the following function modules in transaction SE37:

SCCR_LOCK_CLIENT ( to lock the client)

SCCR_UNLOCK_CLIENT (to unlock the client)

Locked Information

SM01 – lock/unlock transaction

SU01 – user accounts locked/unlocked (table USR02, field UFLAG: 64 = locked, 0 = unlocked)
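The UFLAG values can be read as a bitmask of lock reasons. A small decoder sketch (the helper is made up; the bit values 32, 64, and 128 are the standard documented lock states, with 0 meaning unlocked):

```python
# USR02-UFLAG is a bitmask of lock reasons (decoder is an illustrative sketch).
LOCK_BITS = {
    32: "locked globally by administrator",
    64: "locked locally by administrator",
    128: "locked after too many failed logons",
}

def decode_uflag(uflag: int) -> list[str]:
    """Return the lock reasons encoded in a USR02 UFLAG value; empty = unlocked."""
    return [reason for bit, reason in LOCK_BITS.items() if uflag & bit]

assert decode_uflag(0) == []                                   # unlocked
assert decode_uflag(64) == ["locked locally by administrator"]
assert len(decode_uflag(64 + 128)) == 2                        # both reasons set
```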

How do you lock/unlock a user in SAP?

To lock or unlock a single user, use transaction SU01 (Lock/Unlock); for mass locking/unlocking, use SU10.



View locked transactions – SM01 (look at field CINFO in table TSTC; you can use either SE11 or SE16 to browse the table contents)

SM12- Old lock entries

To lock/unlock an SAP system to prevent logons:

– tp locksys <SID> pf=<tp profile>

– tp unlocksys <SID> pf=<tp profile>

Scheduling of system maintenance jobs

·         RSBTCDEL Clean the old background job records

·         RSBDCREO Clean batch input session logs

·         RSPO0041 Removing old spooling objects

·         RSSNAPDL Clean the old ABAP error dumps

·         Brtools & Database (EXP/IMP)

Brtools environment setup (Windows example):

cd d:\usr\sap\ser\sys\exe\run

set SAPDATA_HOME=d:\oracle\ser

set ORACLE_SID=ser

brtools -V  (check the brtools version)

brspace -f tbexport -t usr02  (export a table)

brtools  (menu-driven database backup)

brconnect -u / -c -f cleanup  (clean up old logs and traces)

brbackup -u / -c force -t online -m all -p <profile> -w use_dbv -v D:\backup  (online backup of all files, with verification)

brconnect -u / -c -f stats -t oradict_stats  (Oracle dictionary statistics)

brbackup -u /  (interactive backup)

brconnect -u / -c -f stats -t all -f collect  (collect optimizer statistics for all tables)

·         DB12  SAP Backup Logs

Spool Management

SP01- Spool Output Controller

SP11-  TemSe directory

SP12-  TemSe Administration

SPAD- Spool Administration

Database Administration

AL02  Oracle DB Monitor

DB01  Analyze exclusive lockwaits

DB02  Analyze tables and indexes
DB13  Planning Calendar
DB15  Data Archiving: Database Tables

SM31  Table maintenance (view and download tables)

DB14   Database Monitor


ST04   Database performance monitor (alert logs and performance)

Check the work processes: SM50, or at operating-system level:

ps -ef | grep dw  (dialog/work processes, disp+work)

ps -ef | grep ms  (message server)

ps -ef | grep sapos  (saposcol, the OS collector)

How to Kill work process in SAP?

SM50, SM04, or the OS command kill -9

How to find long-running SAP jobs: SM37, ST05, STAT, STAD, or ST30

If you have a long-running job, how do you analyze it?

You can analyze a long-running job using runtime analysis, transaction SE30.

II ) Maintenance includes monitoring the servers, background jobs, and system performance, and avoiding bottlenecks in SAP environments.

  1. He should be able to monitor and manage the servers, background jobs, and performance of the system
  2. He should be able to monitor the status of work processes, application servers, system logs, etc.
  3. He should be able to rectify any type of problem related to operating systems
  4. He should be able to configure the SAP GUI on client computers
  5. He should be able to rectify minor networking problems
  6. He should have a thorough understanding of IP address configuration and the ping concept
  7. He must be able to troubleshoot any client or server problems
  8. He should be able to create RFCs and configure TMS (Transport Management System)
Monitoring the servers/System Background jobs
AL08  Current Active Users

AL18   OS file system alert (df -k | more)

OS01    LAN check with ping

RZ01    Job Scheduling Monitor

RZ03    Presentation, Control SAP Instances

RZ08   SAP alert Monitor

RZ10   Maintenance of Sap profile Parameter

ST01    System Trace

ST02    Setups/Tune Buffers

ST04    Select DB activities

ST05    Performance trace

ST06    Operating System Monitor, ideal for analyzing the performance of the entire SAP technology stack.

ST07–  useful in reviewing end users logged into the entire system

ST10    Table call statistics

ST03 /ST03N   Performance, SAP Statistics, Workload Monitor

ST07     Application monitor

STAT   Local transaction statistics

STUN-   Performance Monitoring

SM51-   List of SAP Application Servers

CCMS – System Monitoring (RZ20)

SSAA- useful in conducting routine daily, weekly, and monthly systems administration functions

SMLG–  to monitor how well SAP’s logon load balancing is performing; use F5 to drill down into group-specific performance data

SM66- ideal for looking at system-wide performance relative to processes executing on every application and batch server within an SAP system

SM12– display and delete lock entries

ST22- to review ABAP dumps and therefore identify program errors (to aid in escalating such issues to the responsible programming team)

SM36  Background Job Scheduling

SM37  Background Job Monitoring

SM39  Job Analysis

SM49  Execute External OS commands

SM62  Maintain Events

SM64  Release of an Event

SM65  Background Processing Analysis Tool

SM69  Maintain External OS Commands

Job scheduling stages:

Scheduled, Released, Ready, Active, Finished, Cancelled
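The stages above can be sketched as a tiny state machine (an illustrative model, not an SAP API; the allowed-transition table is a simplification):

```python
# Background-job life cycle as a toy state machine (simplified sketch;
# "Ready" sits between Released and Active in the standard job life cycle).
TRANSITIONS = {
    "Scheduled": {"Released"},
    "Released":  {"Ready", "Cancelled"},
    "Ready":     {"Active"},
    "Active":    {"Finished", "Cancelled"},
    "Finished":  set(),
    "Cancelled": set(),
}

def is_valid_path(states):
    """Check that each consecutive pair of states is an allowed transition."""
    return all(b in TRANSITIONS[a] for a, b in zip(states, states[1:]))

assert is_valid_path(["Scheduled", "Released", "Ready", "Active", "Finished"])
assert not is_valid_path(["Scheduled", "Active"])  # a job must be released first
```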

Transport Management System

STMS  Transport Management System

SE01    Transport and Correction System

SE06    Set Up Workbench Organizer

SE07    CTS Status Display

SE09    Workbench Organizer

SE10    Customizing Organizer

SE11    ABAP/4 Dictionary Maintenance

SE16    Data Browser

SE80    Repository Browser

SM30  Call View Maintenance

SM31 Table Maintenance

SCC1   Client Copy – Special Selection

STMS   Transport Management System

III ) Perform day-to-day BASIS admin responsibilities, including troubleshooting, load analysis, alert monitoring, and configuration

Monitoring Alert Monitoring
AL08 Current Active Users
OS01 LAN check with ping
RZ01 Job Scheduling Monitor
RZ03 Presentation, Control SAP Instances
ST01 System Trace
ST02 Setups/Tune Buffers
ST04 Select DB activities
ST05 Performance trace
ST06 Operating System Monitor
ST10 Table call statistics
ST03 Performance, SAP Statistics, Workload
ST07 Application monitor
STAT Local transaction statistics
STUN Performance Monitoring (not available in R/3 4.6x)
AL01   SAP Alert Monitor

AL02    Database alert monitor

AL04  Monitor call distribution

AL05   Monitor current workload

AL16  Local Alert Monitor for Operat.Syst.

AL18  Local File System Monitor

RZ20  CCMS Monitoring


FILE    Cross-Client File Names/Paths

RZ04  Maintain Operation Modes and Instances

RZ10               Maintenance of Profile Parameters

RZ11               Profile parameter maintenance

SE93                Maintain Transaction Codes

SM63               Display/Maintain Operating Mode Sets

SPRO               Customizing: Initial Screen

SWU3              Consistency check: Customizing

IV ) Important Parameters & Tables

Profile Parameters for Client Login and Password Security (RZ10, RZ11)

Commonly maintained login/* parameters include:

login/min_password_lng  (minimum password length)

login/password_expiration_time  (days until a password must be changed)

login/fails_to_session_end  (failed attempts before the session ends)

login/fails_to_user_lock  (failed attempts before the user is locked)

login/failed_user_auto_unlock  (automatic unlock of failed-logon locks at midnight)

login/no_automatic_user_sapstar  (deactivate the hard-coded SAP* user)

login/disable_multi_gui_login  (disallow multiple GUI logins by the same user)

Important Tables
To find an Instance Name      SVERS

To find OS platform   TSLE4

Check Table Space      RSORAT01

Check Table Extent    RSORATC5

User administration

User master     USR01

Logon data      USR02

User address data       USR03

User master authorizations      USR04

User Master Texts for Profiles (USR10)        USR11

User master: Authorizations   UST12

User master authorization values        USR12

Short Texts for Authorizations           USR13

Prohibited passwords  USR40

Objects                                    TOBJ

Authorization Object Classes TOBC

Profile Name for Activity Group        TPRPROF

Table for development user    DEVACCESS


Batch input queue    


Queue info definition APQI

Job processing          

Job status overview table        TBTCO

Batch job step overview        TBTCP



Spool: Print requests   TSP02

Runtime errors

Runtime errors            SNAP

Message control        

Processing programs for output         TNAPR

Message status            NAST

Printer determination  NACH

SBAT : BASIS System Tables

TSTCT : Transaction Code Texts

Daily Monitoring T-codes / Top SAP BASIS Critical Admin Tasks
AL08    Current Active Users

SM12  Display and Delete Locks( lock entries)

SM13  Display Update Records( Check the Pending Updates)

SM21 To check the  System Logs

SM50  Work Process Overview

SM51  List of SAP Servers

SM66  System Wide Work Process Overview

ST22    ABAP/4 Runtime Error (Short Dump) Analysis

ST01     System Trace

ST02     Setups/Tune Buffers

ST03N Workload overview

ST04     Select DB activities( Database Performance Analysis)

ST05     Performance trace

ST06     Operating System Monitor

ST10     Table call statistics

ST03     Performance, SAP Statistics, Workload

SU56    Analyze User Buffer

OS01    LAN check with ping

RZ01    Job Scheduling Monitor

RZ03    Presentation, Control SAP Instances

ST07     Application monitor

STAT  Local transaction statistics

SM35   Batch Input Monitoring

SP12    Deleting Obsolete Temporary Objects and Reclaiming the Space

DB02OLD  Check tablespace growth

SM37/SM36  To check the status of the previous day’s background jobs

DB13  To Assign  the Backup schedule

PFCG  Profile Generator( Role, Authorization)

1. SAP R/3 System Status Check: Logon Test

2. Backup Management: DB12

3. Application Server Status Check: SM51

4. CCMS Alerts Check: RZ20

5. Work Process Status Check: SM50/SM66

6. Failed Updates Monitoring: SM13

7. System Log Review: SM21

8. Jobs Monitoring: SM37/SM35

9. Check for old locks SM12

10. Spool Administration SP01

11. Check for ABAP/Short dumps ST22

12. Work load Analysis: ST03/ST03N

13. Review buffer statistics ST02

14. Database Performance Analysis ST04

15. User Management SM04/AL08

16. Operating System Monitoring: OS06

17. SE38/SA38/SE16/SM30 – sensitive T-codes (restrict access)

  1. A Basis consultant should be able to handle the administration of SAP, including installation, configuration, and maintenance.
  2. Installations may include SAP R/3, ECC, NetWeaver, NetWeaver components, Solution Manager, etc.
  3. He should be able to do SAP license management (SLICENSE, saplicense -show)
  4. He should be able to analyze ABAP dumps
  5. He should be able to do system copies

SAP R/3 dispatcher and work processes

Types of work processes:

Message – coordinates the communication between the different instances of a single SAP R/3 system; used for logon and load balancing

Dispatcher – redirects requests from GUI clients to a free work process

Dialog – interprets ABAP code and executes the business logic; used for interactive online processing

Batch – for background jobs

Enqueue – the single central lock-management service that controls the locking mechanism between the different application servers and the database

Update – responsible for consistency of asynchronous data changes

Gateway – used to transport larger amounts of data between application servers, as well as to external (non-SAP) systems that communicate with SAP
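The division of labour above can be sketched as a toy routing table mapping request kinds to work-process types (purely illustrative; the request-kind names are made up, only the process types come from the list above):

```python
# Toy routing table: request kind -> work-process type (illustrative only).
ROUTING = {
    "dialog_step":    "Dialog",    # interactive ABAP processing
    "background_job": "Batch",     # background jobs
    "async_update":   "Update",    # asynchronous data changes
    "lock_request":   "Enqueue",   # central lock management
    "external_comm":  "Gateway",   # communication with external systems
}

def dispatch(request_kind: str) -> str:
    """Return the work-process type a request would be handed to."""
    return ROUTING[request_kind]

assert dispatch("background_job") == "Batch"
assert dispatch("lock_request") == "Enqueue"
```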


Which process first connects to the database?

It’s a Message Server process that connects first to the database

Difference between Application server and Central Instance?
Application Server is just a dialog instance.
Central Instance is Dialog instance + Database Instance
What is the difference between clients 000 and 001?
Client 000 is the SAP source client; client 001 exists only on certain installations (e.g. Solution Manager).
What is the difference between Sap lock and database lock?
An “SAP lock” is an enqueue lock. The enqueue works on a much higher level: e.g. a complete sales document is locked, whereas in the database usually only row locks exist. Since SAP runs on more databases than just Oracle, a locking mechanism was needed that is database-independent and operates at this higher level.
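The granularity difference can be sketched in miniature: one logical enqueue on a sales document versus the several row locks the same change would need in the database (VBAK/VBAP are the standard sales-document header/item tables; the document number and item keys here are made up):

```python
# One coarse enqueue lock on a business object vs. the many database row locks
# the same change would touch (document number and item keys are illustrative).
doc = "0000004711"
enqueue_locks = {("sales_document", doc)}               # one object-level lock
row_locks = [("VBAK", doc)] + [("VBAP", f"{doc}-{i:04d}") for i in (10, 20, 30)]

# The enqueue service holds a single lock where the database holds one per row.
assert len(enqueue_locks) == 1
assert len(row_locks) == 4
```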
What is Access method?
The access method is the way the output device is connected to the SAP system. It is specified during the definition of the output device (transaction SPAD).
What is the difference between ST02 and ST04 transaction monitoring?
ST02 is used only to monitor memory-related areas (buffer hit ratio, roll area, page area), which, when exhausted, affect SAP performance.
ST04 covers database-related monitoring: backup schedules, locks, performance, etc.
How to start & stop an SAP instance

UNIX:

startsap db
startsap r3
startsap all

stopsap r3
stopsap db
stopsap all

NT/Windows:

startsap name=<SID> nr=<system number> SAPDIAHOST=<hostname>

stopsap name=<SID> nr=<system number> SAPDIAHOST=<hostname>

Before stopping SAP  System

Check status of User/Active Process

List Of Users : SM04,AL08

List of Active Process : SM50,SM66

Send a system message : SM02

Or use CCMS ( RZ03) – Control- start& stop

Security Management (FAQ)

  1. How to transport roles from Production to Development or Sandbox?

Go to PFCG and enter the role you want to transfer to the other system.

Go to Utilities, then Mass download; it will ask for the path where to download/save that role on the local desktop. Give the location and save it.

Next, log on to the system where you want that particular role. Go to PFCG, then Role, then Upload.

Give the path where the role is saved; it is accepted and generated successfully.

  1. How to check the missing authorization for a user who does not have the option SU53?

You can use Trace function, ST01, you can trace the user activity and from the log you can see the authorization missing.

Start an authorization trace using the ST01 transaction and carry out the transaction with a user who has full authorizations. On the basis of the trace, you can see which authorizations were checked.

  1. What is the difference between role and a profile?

Role and profile go hand in hand. A profile is brought in by a role. A role is used as a template, to which you can add T-codes, reports, etc. The profile is what actually gives the user authorization. When you generate a role, a profile is automatically created.

  1. What is the use of role templates?

User role templates are predefined activity groups in SAP consisting of transactions, reports and web addresses.

  1. What is the difference between single role & composite role?

A role is a container that collects the transaction and generates the associated profile. A composite role is a container which can collect several different roles.

  1. Is it possible to change role template? How?

Yes, we can change a user role template. There are exactly three ways in which we can work with user role templates:

We can use them as they are delivered in SAP

We can modify them as per our needs through PFCG

We can create them from scratch.

For all of the above, we use transaction PFCG to maintain them.

Please explain the personalization tab within a role.

Personalization is a way to save information that can be common to the users of a role. E.g. you can create SAP queries and manage authorizations by user groups; this information can then be stored in the personalization tab of the role. (Arguably this is SAP’s way of addressing the ambiguity between its concepts of user group and role: which of the two is “the grouping of people sharing the same access”?)

  1. How to insert missing authorization? Ways?

SU53 is the best transaction with which we can find the missing authorizations, and we can insert those missing authorizations through PFCG.

  1. Someone has deleted users in our system, and I am eager to find out who. Is there a table where this is logged?

Debug or use report RSUSR100 to find the info.

Alternatively, run transaction SUIM and drill down to its Change Documents section.

  1. How can i do a mass delete of the roles without deleting the new roles?

There is an SAP-delivered report that you can copy, remove the system-type check from, and run. To do a landscape-wide delete, enter the roles to be deleted in a transport, run the delete program or delete manually, and then release the transport and import it into all clients and systems.

To use it, you need to tweak/debug and replace the code, as it has a check that ensures it deletes SAP-delivered roles only. Once you get past that little bit, it works well.

  1. How to compare the roles where created or defined in two different systems?

For role comparison, both roles must be in the same system and the same client.

Transaction code SUIM -> Comparison-> Roles

If the roles are in different system, then transport the role into one of the system and do comparison. If no transport connection defined then, you can use the upload and download option in the PFCG

Steps for Role Comparing:

  1. Run the t-code SUIM
  2. Go to Comparison and select the option Roles
  3. Click on the Across Systems option; under Remote Comparison, enter the system IDs between which you want to compare, put the role name in the compare-role section, and execute to get the result.
  4. If there is any difference between the t-codes, it will be shown in red, otherwise in yellow.
  1. What is the procedure for creating new user which have all features define under SAP* user and which could allow me to make the configurations?

Creating new user with superuser authorizations.

  1. Go to SU01 –

username : sapuser

  2. In default settings, give

first name : sap

last name : user

  3. Go to the next tab,

give initial password : 1234

repeat password : 1234

  4. Go to profiles.

type sap_all (press Enter)

sap_new (press Enter)

Then save.

See the message in the status bar (user created successfully).

  5. Log in with the new user and change the password. This user now has all superuser authorizations.
  1. The administrator user cannot be used to log on to the J2EE Engine because it has been locked. How will you correct the situation?

To correct this situation, I had to use an emergency user account.

SAP* user account has full administrator authorizations, but this account doesn’t have a default password. It must be specified when account is activated. Once SAP* is activated, no other user can log in to the system.

Check properties on Config Tool (Edit UME):

– ume.superadmin.activated (set ‘true’);

– ume.superadmin.password (specify a password).

Restart Application Server.

You have all users locked in an ABAP system. How will you deal with this situation?

Make sure the profile value login/no_automatic_user_sapstar is set to 0, so that the hard-coded emergency SAP* user is available.

Log on to host system and connect to database.

Use the following query:

– delete from <sid>.USR02 where BNAME = 'SAP*' and MANDT = 'xxx';

Now the SAP* user is generated again with the default password “pass”.

  1. How would you copy all users from DEV to PRD?

Execute transaction SCC8 and select the profile SAP_USER. Then specify the target system and schedule the background job. This will export all users from the source system in the form of a transport request.

Now login to the destination system and enter tcode SCC6. Specify the request number generated while exporting and click on “prepare import”.

You can check logs in SCC3 transaction.

Tablespace Coalesce

The following query lists pairs of adjacent free extents in a tablespace (here SYSTEM): a second extent is adjacent when it starts exactly where the first one ends. The coalesce statement then merges such neighbours.

select a.tablespace_name, a.file_id, a.block_id, a.blocks,
       b.block_id, b.blocks
from dba_free_space a, dba_free_space b
where a.tablespace_name = 'SYSTEM'
and b.tablespace_name = 'SYSTEM'
and a.tablespace_name = b.tablespace_name
and a.file_id = b.file_id
and a.block_id + a.blocks = b.block_id;

alter tablespace USERS coalesce;
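The adjacency predicate in the query (a.block_id + a.blocks = b.block_id) is what identifies mergeable neighbours. The same test in a small stand-alone sketch (hypothetical extent data, not Oracle output):

```python
# Free extents as (file_id, block_id, blocks); two extents in the same data
# file are coalescable when one ends exactly where the next begins.
free_extents = [
    (1, 100, 8),    # ends at block 108
    (1, 108, 16),   # starts at 108 -> adjacent to the previous extent
    (1, 200, 4),    # isolated
]

def adjacent_pairs(extents):
    """Return pairs of free extents that ALTER TABLESPACE ... COALESCE would merge."""
    s = sorted(extents)
    return [(a, b) for a, b in zip(s, s[1:])
            if a[0] == b[0] and a[1] + a[2] == b[1]]

assert adjacent_pairs(free_extents) == [((1, 100, 8), (1, 108, 16))]
```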




What are the advantages of using SAP?

SAP is a very complex software application, yet it is surprisingly easy to process user transactions such as posting an invoice or a payment, and it allows you to customize the system to your company's needs. The main advantage is that SAP is well-integrated software across its different modules (SAP FI, CO, MM, SD, PP, etc.), which means you get consistent information at any point in time and can reduce communication overhead. The SAP GUI runs as a Windows application.

SAP NetWeaver Portal  offers a comprehensive portfolio of innovative solutions, based on best in class portal platform. The latest release SAP NetWeaver Portal 7.3 includes:

  • Enterprise-ready, highly scalable platform, with significantly enhanced capabilities and tools for end users, administrators and developers
  • Open, extendable platform for implementing intranet, extranet or mobile portal scenarios based on innovative add-on solutions, flexible APIs, industry standards
  • Flexible UI integration layer to interoperate and integrate business processes with on premise / on demand services

For more information about Portal capabilities see the Solution in Brief.

In the following sections you find information about the right tools and capabilities for each user group (end users, expert users, administrators and developers).

End Users – Consume and Collaborate Everywhere

SAP NetWeaver Portal – boost user productivity through smart and intuitive tools. Easy collaboration and content sharing with:
  • Enterprise Workspaces: create your own pages and populate them with content.
  • Wikis: perform collaborative writing and save time by making ideas available, sharing knowledge and managing related information.
  • Forums: share knowledge by communicating and proactively delivering relevant information to people who have similar interests.

More information on wikis and forums, see Collaboration.

Smart Access with Ajax Framework Page:

  • Harmonized, intuitive user interfaces design
  • Easy navigation with quick launch and tabsets
  • Quick access via favorites, history and personalization

End users can use Business Suite applications and services via more than 90 valuable Business Packages for SAP NetWeaver Portal.

Key or Expert Users – Deliver Your Message to Everyone

SAP NetWeaver Portal – enabling key users to do their job efficiently. It offers smart tools to support professionally managed content scenarios:

  • Web Content Management provides various web content management services for easily creating and managing pages in SAP NetWeaver Portal
  • Knowledge Management with KM enables key users to manage their documents in the portal
  • Mashups enable key users to quickly create interactive portal pages by connecting (wiring) various content items on a portal page
  • Flexible tools and APIs allow customers and partners to easily configure, extend and customize the solutions

Administrators – Run a High-Performance Portal with Near-Zero Downtime

SAP NetWeaver Portal – Providing powerful tools for managing and administrating your portal landscape:

  • Simplified content and system administration
  • Improved server performance & scalability
  • Reduced costs for portal operations and administration
  • Enhanced content sharing with SAP NetWeaver Portal
  • Improved performance & availability of the new Java server
  • Easier integration of applications with the improved “Application Integrator” tool
  • Better tools for administration, monitoring and troubleshooting with SAP NetWeaver Administrator
  • Support Near Zero Downtime for Maintenance
  • Better transport management with CTS+
  • Apply your individual corporate design to the portal (branding)

For more information see SAP NetWeaver Portal Administration and Development.


Developers – Extend the Portal to Meet the Diverse Needs of Your Organization

SAP NetWeaver Portal – Extending your Portal based on open standards and interfaces.

  • Developer Studio:  Enhanced version of SAP NetWeaver Developer Studio (based on Eclipse 3.5)
  • Open framework for custom development and extensions using powerful portal APIs and Web Services
  • Web standards: supports modern standards for security, development and portal interoperability such as Java EE 5, EJB 3.0, JSP 2.1, SAML 1.1/2.0, JSR 168/286
  • Upgrade tools and how-to guides for application migration
  • Enhanced SAP NetWeaver Administrator for easier server administration: system and deployment status, logging, troubleshooting and error handling



3-Tier Architecture

SAP R/3 uses three-tier architecture.

  • R signifies Real-time system
  • 3 represents –  3-tier architecture.


SAP R/3 is a client server model, using 3-tiered architecture. The three layers are


Presentation Layer


Application Layer

Database layer


1) Presentation Layer: The presentation layer provides the means of input (allowing users to manipulate the system) and output (allowing the system to present the results of the user’s manipulation). SAP provides a graphical user interface (SAP GUI). The SAP GUI is installed on individual machines, which act as the presentation layer.


2) Application Layer: In this layer business logic is executed. The application layer can be installed on one machine, or it can be distributed among more than one system.


3) Database Layer: The database layer holds the data. SAP R/3 does not provide its own database, but supports any major relational database (RDBMS). The database layer is typically installed on one machine or system. The major databases used in SAP implementations are Oracle and DB2.


SAP R/3 is written in SAP's own programming language, ABAP; the kernel is written in C.

User’s PC:-  Users can access SAP system in two ways:-

  1. Through SAP GUI
  2. Through Web browser


It’s called front-end. Only the front-end is installed in the user’s PC not the application/database servers.

Front-end takes the user’s requests to database server and application servers.

Application Servers:-  The application server is built to process business logic. This workload is distributed among multiple application servers. With multiple application servers, users get their output more quickly.

Application server exists at a remote location as compared to location of the user PC.

Database Server:-Database server stores and retrieves data as per SQL queries generated by ABAP and java applications.

Database and Application may exist on the same or different physical location.

Understanding different SAP layers


Presentation Layer :

The Presentation Layer contains the software components that make up the SAPgui (graphical user interface). This layer is the interface between the R/3 System and its users. The R/3 System uses the SAPgui to provide an intuitive graphical user interface for entering and displaying data.

The presentation layer sends the user’s input to the application server, and receives data for display from it. While a SAPgui component is running, it remains linked to a user’s terminal session in the R/3 System.

Application Layer :

The Application Layer consists of one or more application servers and a message server. Each application server contains a set of services used to run the R/3 System. Theoretically, you only need one application server to run an R/3 System. In practice, the services are distributed across more than one application server. The message server is responsible for communication between the application servers. It passes requests from one application server to another within the system. It also contains information about application server groups and the current load balancing within them. It uses this information to assign an appropriate server when a user logs onto the system.

Database Layer :

The Database Layer consists of a central database system containing all of the data in the R/3 System. The database system has two components: the database management system (DBMS) and the database itself. SAP now offers its own database, SAP HANA, but R/3 is also compatible with all major databases such as Oracle. All R/3 data is stored in the database. For example, the database contains the control and customizing data that determine how your R/3 System runs. It also contains the program code for your applications. Applications consist of program code, screen definitions, menus, function modules, and various other components. These are stored in a special section of the database called the R/3 Repository and are accordingly called repository objects. Repository objects are used in the ABAP Workbench.

Understanding the components of SAP R/3 3-tier Architecture:-


ABAP+Java System Architecture

  1. Message Server: Handles communication between the distributed dispatchers in the ABAP system.
  2. Dispatcher Queue: Incoming requests are stored in this queue, sorted by work process type, until a work process is free.
  3. Dispatcher: Distributes requests to the work processes.
  4. Gateway: Enables communication between SAP systems, and between an SAP system and external systems.
  5. ABAP Work Processes: Execute dialog steps in R/3 applications separately from one another.
  6. Memory Pipes: Enable communication between the ICM and the ABAP work processes.
  7. Message Server (Java): Handles Java dispatchers and server processes; enables communication within the Java runtime environment.
  8. Enqueue Server: Handles logical locks set by the executed Java application program in a server process.
  9. Central Services: The Java cluster requires a special instance of the central services for managing locks and transmitting messages and data. A Java cluster is a set of processes that work together to build a reliable system; an instance is a group of resources such as memory, work processes, and so on.
  10. Java Dispatcher: Receives client requests and forwards them to the server processes.
  11. SDM: The Software Deployment Manager is used to install J2EE components.
  12. Java Server Processes: Can process a large number of requests simultaneously.
  13. Threading: Multiple processes execute separately in the background; this concept is called threading.
  14. ICM: The Internet Communication Manager enables communication between the SAP system and the HTTP, HTTPS, and SMTP protocols. This means you can also access SAP from a browser by entering the system URL.


One more component is the JCo (Java Connector). JCo handles communication between the Java dispatcher and the ABAP dispatcher when a system is configured as ABAP+Java.

How does the SAP logon process work?


Step 1) Once the user clicks on the SAP system in the GUI, the user request is forwarded to the dispatcher.

Step 2) The request is first stored in the request queue. The dispatcher follows a first-in, first-out rule: it finds a free work process and, if one is available, assigns the request to it.

Step 3) A particular work process is assigned according to the user's request. For example, when a user logs on to the system, a dialog work process is assigned; if the user runs a report in the background, a background work process is assigned; when modifications are made at the database level, an update work process is assigned. So the work process type depends on the user's action.

Step 4) Once the user is assigned a dialog work process, the user's authorizations and current settings are rolled in to the work process in shared memory so that it can access the user's data. Once the dialog step is executed, the user's data is rolled out of the work process, so the shared memory is cleared and another user's data can be stored there. A dialog step means a screen movement: in a transaction, when a user jumps from one screen to another, that step is called a dialog step.

Step 5) The work process first looks for the data in the buffer. If it finds the data there, there is no need to retrieve it from the database; response time improves, and this is called a hit. If it does not find the data in the buffer, it retrieves the data from the database, which is called a miss. The hit ratio should always be higher than the miss ratio, as this improves system performance.

Step 6) Any other requested data is queried from the database, and once processing is complete, the result is sent back to the GUI via the dispatcher.

Step 7) At the end, the user's data is removed from shared memory so the memory is available to other users. This process is called roll-out.
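The queueing and buffering behavior described in Steps 2 and 5 can be sketched as a toy model (the work process names, requests, and buffer contents are invented; this is a simplified illustration, not SAP code):

```python
from collections import deque

# Simplified model of the dispatcher described above: requests queue up
# FIFO, free work processes are assigned, and the buffer is checked
# before the database (hit vs. miss).

class Dispatcher:
    def __init__(self, work_processes):
        self.queue = deque()           # FIFO request queue
        self.free = list(work_processes)

    def submit(self, request):
        self.queue.append(request)

    def dispatch(self):
        assigned = []
        while self.queue and self.free:
            request = self.queue.popleft()   # first in, first out
            wp = self.free.pop(0)
            assigned.append((request, wp))
        return assigned

class BufferedReader:
    def __init__(self, database):
        self.buffer = {}
        self.database = database
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.buffer:               # "hit": no DB round trip
            self.hits += 1
            return self.buffer[key]
        self.misses += 1                     # "miss": go to the database
        value = self.database[key]
        self.buffer[key] = value
        return value

d = Dispatcher(["DIA_0", "DIA_1"])
d.submit("login user A")
d.submit("run report B")
d.submit("login user C")
print(d.dispatch())   # only two free work processes -> third request waits

reader = BufferedReader({"T000": "client table"})
reader.read("T000"); reader.read("T000"); reader.read("T000")
print(f"hit ratio: {reader.hits / (reader.hits + reader.misses):.0%}")
```

In this run the first read is a miss (the buffer is empty) and the next two are hits, so the hit ratio is 2 out of 3, illustrating why repeated reads of buffered data improve response time.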



SAP R/3 dispatcher and work processes

Types of work processes:
Message: Coordinates communication between the different instances of a single SAP R/3 system. Used for logon and load balancing.
Dispatcher: Redirects requests from a GUI client to a free work process.
Dialog: Interprets ABAP code and executes the business logic. Used for interactive online processing.
Batch: For background jobs.
Enqueue: The single "central lock management service" that controls the locking mechanism between the different application servers and the database.
Update: Responsible for consistency in asynchronous data changes.
Gateway: Used to transport larger amounts of data between application servers, as well as with external (non-SAP) systems that communicate with SAP.
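As a rough summary of the list above, the mapping from request kind to work process type can be written as a small lookup (a sketch: the request labels are invented, while DIA/BTC/UPD/ENQ/SPO are the customary SAP short codes):

```python
# Sketch: which work process type serves which kind of request.
# The short codes (DIA, BTC, UPD, ENQ, SPO) are the usual SAP
# abbreviations; the request labels are invented for this example.

WORK_PROCESS_FOR = {
    "interactive dialog step": "DIA",
    "scheduled background job": "BTC",
    "asynchronous database update": "UPD",
    "lock request": "ENQ",
    "print formatting": "SPO",
}

def assign(request_kind):
    return WORK_PROCESS_FOR.get(request_kind, "unknown")

print(assign("scheduled background job"))  # BTC
print(assign("interactive dialog step"))   # DIA
```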

To process SAP requests from several front ends, an SAP application server has a dispatcher, which collects the requests and forwards them to work processes for execution.


There are the following types of work processes:

Dialog — For executing dialog programs

Update — For asynchronous database updates

Background (batch) — For executing background jobs

Enqueue — For executing lock operations

Spool — For print formatting


Work processes can be assigned to dedicated application servers. In the service overview (SM51), you can see which work process types are provided by the individual servers.



Workprocess Services

The work process overview can be obtained from transaction SM50.

The types of work processes are Dialog, Background, Update, Spool, and so on.

The work process types are as follows:


Work Process Type


Dialog  —  Executes dialog programs (ABAP)


Update —  Asynchronous database changes (is controlled by a COMMIT WORK statement in a dialog work process)


Background — Executes time-dependent or event-controlled background jobs


Enqueue —  Executes locking operations (if SAP transactions have to synchronize themselves)


Spool —  Print formatting (to printer, file or database)



Several dialog work processes usually run on one application server, but typically only one or two work processes of each of the other types.

A work process is a process that handles an assigned task. Work processes operate according to the SAP services assigned to them.

The SAP services are Dialog, Background, Update, Spool, Enqueue, Gateway (ICM), and Message.


Work processes and its assigned services can be monitored with SM50 and SMICM.


More details about services:


Work Process service Type


Dialog — Executes dialog steps (programs)


Update (V1 and V2 or UPD and UP2) — Asynchronous database changes (is controlled by a COMMIT WORK statement in a dialog work process)


Background — Executes time-dependent or event-controlled background jobs


Enqueue — Mainly executes lock management


Spool — Print formatting (to printer, file, or database)


Gateway — Executes external connectivity

(In the latest versions, the SAP Gateway service has been replaced by the ICM, which can be monitored with transaction SMICM.)



Step by Step Procedure for Downloading and Applying the SAP License Key

Step by Step Procedure for Downloading the SAP License Key:

  1. Log on to the SAP Service Marketplace with the administrative S-user ID.
  2. Select the Data Administration tab, input the System ID (SID, e.g. AE1), and continue.
  3. Input the following values:

System Name: WINSAP08

System Type: Test System

Product: SAP ERP

Product Version: SAP ERP 6.0

Technical Usage: SAP ERP Central Component (ECC 6.0)

Database: MS SQL Server

Operating System: Windows

Planned Productive Date: (not mandatory)


  4. After continuing to the next screen, enter the Hardware Key, which you can obtain from the SAP system through the SLICENSE transaction.
  5. Select the license type: Standard – Web Application Server ABAP, or ABAP+JAVA.
  6. Select either a Standard license or a Maintenance Certificate and continue.
  7. Confirm the email ID to which the license will be sent. (It normally takes about 10 minutes for the license to arrive at the given address.)


Step by Step Procedure for Applying the SAP License Key:


  • Log on to the SAP system with an administrator user ID in any client. (The license and maintenance certificate can be applied from any client.)
  • Run transaction SLICENSE; you will see the temporary license that is available after installation.
  • Select New License from the menu and click Install.
  • Select the downloaded license file you received by mail and apply it.
  • The new license will be installed successfully.


TMS – step-by-step example on how to setup and test TMS

For a single-system landscape:

Log on to client 000 as DDIC and run STMS. The system will ask for a domain controller; assign the system itself as its own domain controller.

Now go to STMS -> System Overview -> SAP System -> Create -> Virtual System, and give it the name V<SID>.

At this point the domain controller and the virtual system are created. Next, set up the transport routes.

Go to STMS -> Transport Routes -> Change -> Configuration -> Standard Configuration and select the second option (Development and Production System), because here your own system acts as the development system and the virtual system acts as the production system.

A pop-up will appear; enter the development SID and the production SID, and the route will be created automatically.

Save this and exit.

This is how a single-system landscape is set up.

TMS Configuration


  • TMS is the transport tool that assists the CTO (Change and Transport Organizer) in the central management of all transport functions. TMS is used for:
    • Defining the transport domain controller
    • Configuring the SAP system landscape
    • Defining the transport routes among systems within the system landscape
    • Distributing the configuration
  • Transport Domain Controller – one of the systems from the landscape that contains complete configuration information and controls the system landscape whose transports are being maintained jointly. For availability and security reasons, this system is normally the Productive system.


Within a transport domain, all systems must have unique system IDs, and only one of these systems is identified as the domain controller: the system where all TMS configuration settings are maintained. Any changes to the configuration settings are distributed to all systems in the landscape. A transport group is one or more systems that share a common transport directory. A transport domain comprises all the systems and transport routes in the landscape. Landscape, group, and domain are terms often used synonymously by system administrators.

Step 1:Setting up the Domain Controller

  • Log on to the SAP system, which is decided to be the Domain Controller, in client 000 and enter the transaction code STMS.
  • If there is no Domain Controller already, system will prompt you to create one. When the Transport Domain is created for the first time, following activities happen in the background:
    • Initiation of the Transport Domain / Landscape / Group
    • Creating the user TMSADM
    • Generating the RFC destinations required for the R/3 configuration, with TMSADM as the target logon user.
    • Creating the DOMAIN.CFG file in the /usr/sap/trans/bin directory – this file contains the TMS configuration and is used by systems and domains to check existing configurations.

Step 2: Start transaction STMS


Step 3: Adding SAP systems to the Transport Domain

  • Log on to the SAP systems (to be added to the domain) in client 000 and start transaction STMS.
  • TMS will check the configuration file DOMAIN.CFG and automatically propose joining the domain (if the domain controller has already been created). Select the proposal and save your entries.
  • For security purposes, the system will remain in 'waiting' status until it is accepted into the transport domain.
  • To complete the acceptance, log on to the domain controller system (client 000) -> STMS -> Overview -> Systems. The new system will be visible there. From the menu, choose 'SAP System' -> Approve.


Step 4:Configuring Transport Routes

  • Transport Routes – are the different routes created by system administrators and are used to transmit changes between the systems in a system group/landscape. There are two types of transport routes:
    • Consolidation (From DEV to QAS) – Transport Layers are used
    • Delivery (From QAS to PRD) – Transport Layers not required
  • Transport Layer – used to group changes of similar kinds: for example, changes to development objects of the same class/category/package should logically travel through the same transport route. Therefore, transport layers are assigned to all objects coming from the DEV system. Layers are used in consolidation routes; after testing in QAS, layers are no longer used and the changes move toward the PRD system via single (delivery) routes.

Package – (formerly known as Development Class) is a way to classify the objects logically belonging to the same category or project. A package can also be seen as an object itself and is assigned with a specific transport layer (in consolidation route), therefore, changes made in any of the development object belonging to a particular Package, will be transmitted towards target system through a designated Transport Layer only, or else the change will be saved as a Local (non-transportable) modification.
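The routing rules above (consolidation routes carry a transport layer, delivery routes do not) can be modeled as a small sketch (the SIDs and the layer name ZDEV are invented for illustration):

```python
# Toy model of TMS transport routes: consolidation routes (DEV -> QAS)
# carry a transport layer; delivery routes (QAS -> PRD) do not.
# The SIDs and the layer name "ZDEV" are illustrative only.

consolidation = {("DEV", "QAS"): "ZDEV"}   # route -> transport layer
delivery = [("QAS", "PRD")]                # plain routes, no layer

def route_for(change_system, layer):
    """Find where a change travels next, given its transport layer."""
    for (src, dst), route_layer in consolidation.items():
        if src == change_system and route_layer == layer:
            return dst
    for src, dst in delivery:
        if src == change_system:
            return dst
    return None   # no matching route: the change stays local

print(route_for("DEV", "ZDEV"))   # -> QAS (via the consolidation route)
print(route_for("QAS", "ZDEV"))   # -> PRD (delivery route, layer ignored)
print(route_for("DEV", "ZLOCAL")) # -> None: local modification
```

The last case mirrors the point above: an object whose package is not tied to a transport layer on a consolidation route stays a local, non-transportable modification.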


STMS Configuration and Transport Route Configuration

1.     Install the SAP GUI. (In this landscape, the development system is the domain controller.)

2.     Log on to client 000
User: DDIC
Pwd: ******

3.     Enter transaction code SE06 in the command field.

4.     Click Standard Installation, then click Perform Post-Installation Actions (Yes).

5.     Enter transaction code STMS in the command field, give a description, then save it.

6.     Click the System Overview push button, choose Extras from the menu, and click Distribute and Activate Configuration (Yes).

7.     Go back to the STMS main screen and click the Transport Routes push button.

8.     Click the Configuration option in the menu.

9.     Click Distribute and Activate in the Configuration menu.

10.   Now your SAP system is active as the domain controller.

11.   Now log on to the QAS system.

12.   Client: 000
User: DDIC
Pwd: ****

13.   Enter transaction code SE06 in the command field.

14.   Click Standard Installation, then click Perform Post-Installation Actions (Yes).

15.   Enter transaction code STMS in the command field.

16.   Click the Other Configuration push button at the bottom of the screen that appears.

17.   Give the information about the development system, for example: tisdev, 00.

18.   Then save it (OK).

19.   Now QAS is waiting to be included in the transport domain.

20.   Log on to the development system.

21.   Click the System Overview push button.

22.   Choose SAP System from the menu.

23.   Select the QAS system.

24.   Click the Approve button in the SAP System menu.

25.   Now QAS is a member system of the domain controller (the development system).

26.   If you want to configure a production system as well, follow the same steps as for QAS.


STMS configuration for a standalone SAP system (local transport domain)

This article answers the below queries

  • How do I perform the STMS configuration of an SAP system (local transport domain)?
  • How do I do the STMS configuration in the post-refresh steps of an SAP system?


Please use a user ID that has SAP_ALL access and log on to client 000 of the SAP system.

Please execute transaction code STMS.

A screen similar to the one below will appear. Provide a description of the system.
In the above screen, the transport domain name is displayed by default. (The naming convention is DOMAIN_<SID>, where SID is the system ID of the system being refreshed.)

In our example above SE1 is the system id and DOMAIN_SE1 is the domain.

In this example, I am demonstrating how to configure a local transport domain for a standalone SAP system.

Click the Save button in the above screen.

After that, SE1 is configured as the transport domain controller and a screen similar to the one below will appear.

An informational message will be displayed like “You are logged onto the domain controller”.


This completes the STMS configuration (local domain) for an SAP system.

Please note that the above configuration is for standalone system. In case you would like to do STMS configuration for a system which is part of another transport domain, a slightly different process is to be followed which will be covered in a different article.

Step by Step for establishing RFC Connection between SAP-BW & Data Service

There are many descriptive documents that talk about the RFC connections between Data Services and BW, but I created this blog thinking it would be much more helpful if the same were provided with appropriate screenshots.


I assume that before creating the RFC connections, Data Services is installed and your Basis team has already imported the SAP-delivered functions into the SAP BW server. These functions are provided in the form of two transport files.


Following are the steps:

  • Installing Functions on the SAP Server
  • Creating RFC Connection in BO Data Services
  • Creating RFC Connection in BW
  • Install Authorizations


Overview with SAP Provided Diagram



This blog will focus only on establishing the RFC connection between the two servers.


Create RFC Connection in Data Services



1. Log on to the Data Services Management Console.


2. Go to SAP Connections -> RFC Server Interface.


3. Click on RFC Server Interface.


4. Select the tab RFC Server Interface Configuration.


5. Select Add, then provide the necessary server parameters.

Note: Among the parameters, the Program ID is the one you need to provide. You can give it any name based on the naming convention your client prefers.


Ex: Program_ID: XX_YYYY



Click on Apply.

The Program ID you define here will be used in the SAP BW server (or any other SAP server).


6. Select the tab RFC Server Interface Status.

Now you should be able to see the server interface whose name starts with the Program ID you provided.


Select the check box for the server interface and click Start. If all the parameters are correct, the server instance will start with a green status.



With the above step you have created the RFC connection in Data Services. Now remember the Program_ID.  


In the next two steps I will talk about the RFC Connection in SAP BW.


7. Now log on to the SAP BW system.

Use transaction SM59 to create the RFC connection.



Expand TCP/IP Connection, then select Create.




The Program ID is the one you already created in the DS Management Console.

Once you click the Connection Test button, you should see the message Connection Successful.


At this point your RFC connection configuration is ready in both systems (DS & BW), and both systems are ready for data transfer.


How to Configure and Test RFC.

This tutorial is divided into 4 sections

  1. Setting up an RFC connection
  2. Trusted RFC connections
  3. Testing an RFC connection
  4. Error resolution

Procedure to set up an RFC connection:

Enter Transaction Code SM59


In the SM59 screen, you can navigate through the already created RFC connections with the help of the option tree, a menu-based method of organizing all connections by category.

Click the 'Create' button. On the next screen, enter:

  • RFC Destination – Name of Destination (could be Target System ID or anything relevant)
  • Connection Type – here we choose one of the types (as explained previously) of RFC connections as per requirements.
  • Description – This is a short informative description, probably to explain the purpose of connection.

After you save the connection, the system will take you to the 'Technical Settings' tab, where we provide the following information:

  • Target Host – the complete hostname or IP address of the target system.
  • System Number – the system number of the target SAP system.
  • Click Save.

In the 'Logon and Security' tab, enter the target system information:

  • Language – as per the target system's language.
  • Client – in SAP we never log on to a system as such; we always log on to a particular client, so the client number must be specified here for correct execution.
  • User ID and Password – preferably not your own login ID; use a generic ID so the connection is not affected by constantly changing end-user IDs or passwords. Usually a user of type 'System' or 'Communication' is used here. Note that this is the user ID for the target system, not for the source system where the connection is being created.

Click Save. The RFC connection is now ready for use.
Note: By default, a connection is defined as an aRFC. To define a connection as tRFC or qRFC, go to Menu Bar -> Destination -> aRFC options / tRFC options and provide inputs as per requirements. To define a qRFC, use the Special Options tab.

Trusted RFC connection

There is an option to make the RFC connection as ‘Trusted’. Once selected, the calling (trusted) system doesn’t require a password to connect with target (trusting) system.

Following are the advantages of using trusted channels:

  • Cross-system single sign-on facility
  • The password does not need to be sent across the network
  • A timeout mechanism for the logon data prevents misuse
  • User-specific logon details of the calling/trusted system are checked

The RFC users must have the required authorizations in the trusting system (authorization object S_RFCACL). Trusted connections are mostly used to connect SAP Solution Manager systems with other SAP systems (satellites).

Testing the RFC Connection

After the RFCs are created (or sometimes for already existing RFCs), we need to test whether the connection is established successfully.

As shown above, we go to SM59, choose the RFC connection to be tested, and then open the menu Utilities -> Test. The following options are available:

Connection Test -> This attempts to make a connection with the remote system and hence validates the IP address/hostname and other connection details. If the two systems cannot connect, it throws an error; on success, it displays a table of response times. This test only checks whether the calling system can reach the remote system.


Authorization Test -> Used to validate the user ID and password (provided under the 'Logon and Security' tab for the target system), as well as the authorizations granted. If the test is successful, the same screen appears as for the connection test above.

Unicode Test -> Checks whether the target system is a Unicode system.

Remote Logon -> This is also a kind of connection test, in which a new session of the target system is opened; we need to specify a login ID and password (if not already entered under the 'Logon and Security' tab). If the user is of type 'Dialog', a dialog session is created. A successful test outputs the response times for the communication packets; otherwise, an error message appears.

What went wrong?

If the RFC connection is not established successfully, we can check the logs at OS level in the 'work' directory to analyze the issue. There we find log files with the naming convention dev_rfc<sequence no.>, and the error description can be read from these files.
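As an illustration, the log check described above could be scripted roughly as follows (the directory path in the usage comment and the assumption that error lines contain the word "ERROR" are examples only; actual dev_rfc trace formats vary by release):

```python
import os

# Sketch: collect error lines from dev_rfc* trace files in the work
# directory. The path and the "ERROR" line format are assumptions;
# real trace layouts differ by SAP release.

def scan_rfc_logs(work_dir):
    errors = {}
    for name in sorted(os.listdir(work_dir)):
        if not name.startswith("dev_rfc"):
            continue
        path = os.path.join(work_dir, name)
        with open(path, errors="replace") as fh:
            hits = [line.strip() for line in fh if "ERROR" in line]
        if hits:
            errors[name] = hits
    return errors

# Usage (the path is an example):
# print(scan_rfc_logs("/usr/sap/SID/DVEBMGS00/work"))
```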

SAP Profile Parameter Management (RZ10)

The default path of the profiles is /usr/sap/<SID>/SYS/profile.

Start Profile: Starts the SAP R/3 services at OS level. The file is named START_DVEBMGS00_<hostname>.

Default Profile: Contains settings common to the whole R/3 system, such as the message server name and gateway services. The file containing this profile is DEFAULT.PFL.

Instance Profile: Contains the parameters for a particular instance. The file containing this profile is <SID>_DVEBMGS00_<hostname>.

Only one default profile exists in an R/3 system, even if a standby system is also available.

Editing Profiles:

1. Using RZ10
2. Using an OS editor such as vi
3. Using the sappad utility

We need to import the profiles from OS level into SAP by using RZ10:

RZ10 -> Utilities -> Import Profiles -> Of Active Servers

Modifying Profiles

Profile editing has three modes:
1. Administration data
2. Basic maintenance
3. Extended maintenance

Administration data: Shows the path of the profile, when it was last modified, and by which user it was activated.

Basic maintenance: Update the server name, gateway, and similar settings.

Extended maintenance: Add or change individual parameters.

To increase the number of dialog work processes, change the following parameter:

Parameter            Value
rdisp/wp_no_dia      5 to 6
rdisp/wp_no_btc      2
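Since profile files are plain "name = value" text, the parameter change described above can be sketched as follows (the parameter names are real, but the values and file content are examples; in a real system the change is made via RZ10 and requires a restart to take effect):

```python
# Sketch: read and update a parameter in an instance profile.
# Profile files are "name = value" text lines; the parameter names
# below are real SAP parameters, the values are example numbers.

def parse_profile(text):
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        name, _, value = line.partition("=")
        params[name.strip()] = value.strip()
    return params

profile_text = """\
# Instance profile (example content)
rdisp/wp_no_dia = 5
rdisp/wp_no_btc = 2
"""

params = parse_profile(profile_text)
params["rdisp/wp_no_dia"] = "6"     # the change described above: 5 -> 6
print(params["rdisp/wp_no_dia"], params["rdisp/wp_no_btc"])  # 6 2
```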

Reports are run via SE38.

RSPARAM -> displays the parameters available in SAP R/3 -> Execute

To add a new parameter:

RZ10 -> select the profile to be modified -> Extended Maintenance -> click Change

Click the Create Parameter button

Enter the parameter name

Enter the parameter value

Click Copy

Click Back

Save the changes

Activate the profile

1. Whatever modifications we make using RZ10 require a restart of the SAP server to take effect.

2. The version number changes according to the modifications made to the profile.

3. A parameter value in the instance profile overrides the default profile value.

4. If any changes are made on the R/3 server, the same changes must be made on the standby server.

RZ11 — Maintain Profile Parameters: displays the attributes and current value of an individual parameter by name.


SAP Post-Installation Steps


Post Installation Steps

After installing R/3 on a new system, Basis has to perform some post-installation steps before handing the system over to end users for operation. The post-installation steps make sure the system is ready, properly configured, tuned, and able to take the load of user requests.

Below are some standard steps that have to be performed immediately after the installation is finished.

  • Log on to the SAP system using DDIC in client 000.
  • Execute SICK/SM28 to check for any installation errors; if anything is reported, troubleshoot those errors.
  • Apply the SAP license through SLICENSE.
  • Execute SE06, select Standard Installation, and click Perform Post-Installation Actions. Click Yes on each subsequent screen.
  • Execute STMS to configure the TMS. If there is no domain controller in the organization, configure this new system as the DC.

Transport Management System ( TMS ) Configuration In SAP Step By Step With Snap Shots

  • Log on as SAP* in client 000.
  • Execute RZ10 -> Utilities -> Import Profiles -> Of Active Servers.
  • Check the system log in SM21.
  • Check for any dumps in ST22.

Common ABAP Dumps (ST22) and troubleshooting in SAP

  • Execute SCC4 -> click the Change button -> confirm the warning and click New Entries to create a new client.

Client Creation (SCC4) & Logical system(BD54) in SAP

  • Log on to the new client to perform a client copy, using SAP*/<new client number>/PASS.

Which are Client Copy Methods In SAP

  • Set your default client in SAP

How to Define Default Client in SAP

  • Make changes to the number of dialog and background work processes if you need values other than the defaults.

Profile parameters to increase/decrease the number of workprocesses

  • Define operation mode through RZ04 & SM63

How to setup operation modes in SAP ( RZ04 & SM63 )

  • Create one or two super users using SU01 with profiles SAP_ALL and SAP_NEW

Creating a SAP Account / User Creation in SAP (SU01)

  • printer configuration (if required )

How to configure Frontend printer in SAP ( SPAD )

  • Schedule SAP Standard Background jobs through SM36

How to schedule standard jobs in SAP

  • Configure Web GUI

SAP Web Gui Configuration Step By Step With Snap Shots


  • Create login page message

login page message in SAP ( SE61 ) with icon from ( SA38 )


  • Display your company logo on initial screen of SAP

How to Put my own / Company logo in the SAP initial screen ( SMW0 & SM30 )


  • Disable import all option (this depends on organizations requirement )

How to disable Import All option from STMS in SAP


  • Give protection to special users

How To Protect Special Users In SAP

  • Stop and start SAP R/3 for the profile parameters to take effect.
  • Upgrade the kernel to the latest level
  • Upgrade the SPAM version to latest level

The system is now ready for developers and administrators to log on and work.


After installing R/3 on a new system, Basis has to perform some post-installation steps before handing the system over to end users for operation. The post-installation steps make sure the system is ready, properly configured, tuned, and able to take the load of user requests.

Below are some standard steps that have to be performed immediately after the installation is finished.

PART 1:-

  1. Log on to the SAP system using DDIC in client 000.
  2. Execute SE06, select Standard Installation, and click Perform Post-Installation Actions. Click Yes on each subsequent screen.
  3. Execute STMS to configure the TMS. If there is no domain controller in the organization, configure this new system as the DC.
  4. Execute SICK to check for any installation errors; if anything is reported, troubleshoot those errors.
  5. Execute sapdba or brtools to check tablespace sizes and increase any that are more than 90% full.
  6. In sapdba, check the tablespace utilization by selecting c. Tablespace Administration -> c. Free space and fragmentation of tablespaces.
  7. List all the tablespaces filled above 90%.
  8. Add data files to the corresponding tablespaces to increase their size and bring utilization below 80%.
  9. Log on as SAP* in client 000.
  10. Execute SCC4 -> click the Change button -> confirm the warning and click New Entries to create a new client.
  11. Execute RZ10 -> Utilities -> Import Profiles -> Of Active Servers.
  12. Check the system log in SM21.
  13. Check for any dumps in ST22.
  14. Log on at the command prompt as ora<sid> or <sid>adm.
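The tablespace check in steps 5-8 above boils down to simple arithmetic; here is a sketch (all sizes are invented example numbers, in MB):

```python
# Sketch of the tablespace check above: flag anything over 90% full
# and compute how much space to add to get utilization back under
# 80%. All sizes are invented example numbers (MB).

def needs_datafile(used_mb, total_mb, high=0.90, target=0.80):
    usage = used_mb / total_mb
    if usage <= high:
        return 0.0
    # Grow the total so that used/total equals the target ratio.
    return used_mb / target - total_mb

print(needs_datafile(460, 500))  # 92% full -> add about 75 MB
print(needs_datafile(300, 500))  # 60% full -> 0.0, nothing to do
```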

PART 2:-

  1. Log in to the new client to perform a client copy, using SAP*/<new client number> with the password PASS.
  2. Perform the local client copy procedure to copy the new client from client 000.
  3. Once the client copy is over, log in to the new client using SAP* and the SAP* password that was used in client 000.
  4. Execute RZ10 -> select the instance profile -> check Extended maint. -> click Change.
  5. Add the parameter login/system_client to make the new <client_number> the default logon client.
  6. Change the number of dialog and background work processes if the defaults need adjusting.
  7. Save the profile and activate it.
  8. Create one or two superusers using SU01 with the profiles SAP_ALL and SAP_NEW.
  9. Create some developer users if needed; otherwise leave this step.
  10. Stop and start SAP R/3 for the profile parameters to take effect.
  11. Upgrade the kernel to the latest level.
  12. Upgrade the SPAM version to the latest level.
  13. Apply the latest Support Packages to the components SAP_BASIS, SAP_ABA, SAP_APPL, and other components if required.
  14. Follow the standard kernel, SPAM, and Support Package application procedures.
  15. The system is now ready for developers and administrators to log in and work.
  16. Keep adjusting the parameters and system configuration as requirements change later.
  17. Run SGEN to regenerate the objects. In this process SAP keeps all the required objects in the SAP buffers, so that transaction access becomes faster.
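For illustration, the profile changes described above (default logon client plus adjusted work process counts) might look like this in the instance profile. The values are examples only, not recommendations:

```ini
# Instance profile additions (example values)
login/system_client = 500   # default logon client
rdisp/wp_no_dia = 10        # number of dialog work processes
rdisp/wp_no_btc = 3         # number of background work processes
```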





SAP Tickets – What Is That?

Handling tickets is done through an issue tracking system. The errors or bugs forwarded by end users to the support team are prioritized under three severities: High, Medium, and Low. Each severity has a time limit within which the error must be fixed. The main job of a support consultant is to provide online assistance to the customer or organization where SAP is already implemented. For this, the consultant must be very strong in the subject and in the processes implemented in SAP at the client side, in order to understand, analyze, and deliver the right solution at the right time. This is the job of the support consultant.

The open issues, or tickets (problems), that are raised are handled on a priority basis by the support team consultants.

The work process in support projects is given below for your reference.

1. The customer or the end user logs a call through a ticket-handling tool or by mail (for example, RADIX).

2. Each member of the support team is part of a support group.

3. Whenever a customer logs a call, he or she has to mention the target work group (by name).

4. Once the call reaches the work group, the support consultant or team sends an IR (Initial Response) to the user, depending on the priority of the call (Top, High, Medium, Low, None).

5. The error is then fixed and debugged by the support consultant or team, and after proper testing a TR (Transport Request) is generated through the Basis administrator.

6. The end user/customer/super user is then informed about the changes, which are moved to the production server by the CTS process.

This is the process. In summary, if any configuration or customization is required to solve the issue, the consultant works in the DEV client, the end user tests it in the QA client, and after approval the Basis consultant transports it to the PRODUCTION client.

An example:

Tickets in SAP SD can be considered as the problems which the end user or an employee of the company faces while working on R/3. Tickets usually occur during or after the implementation of the project. Numerous problems can occur in production support, and a person working in support has to resolve those tickets within a limited duration; every open ticket has a particular deadline alert, so your responsibility is to finish it before that deadline.

Here is an example of a raised ticket:

The end user is not able to create a sales order for a customer from a new plant, because shipping point determination has not happened. (Without a shipping point the document becomes INCOMPLETE, and the user cannot proceed further to DELIVERY or BILLING.)

He raises a ticket, and the priority is set to one of the following:
1. Low  2. Medium  3. High

Now you need to solve this ticket. You analyze the problem and identify that the shipping point configuration has to be done for the new plant.

You request a transport for the DEV client from Basis, make the change, and request one more transport from Basis for the QA client. The end user tests the change by creating a sales order for the new plant and approves it.

Finally, you request a transport to move the changes to PRODUCTION. Once the change is deployed in production, the ticket is closed. What I have given is a small example; you will get real issues with severity HIGH in your day-to-day support.


How to monitor SAP system and do performance checks

Why Daily Basic checks / System Monitoring?



What is System Monitoring?


System monitoring is a daily routine activity, and this document provides a systematic, step-by-step procedure for server monitoring. It gives an overview of technical aspects and concepts for proactive system monitoring. A few of them are:

  • Checking Application Servers.
  • Monitoring System wide Work Processes.
  • Monitoring Work Processes for Individual Instances.
  • Monitoring Lock Entries.
  • CPU Utilization
  • Available Space in Database.
  • Monitoring Update Processes.
  • Monitoring System Log.
  • Buffer Statistics

Some others are:

  • Monitoring Batch Jobs
  • Spool Request Monitoring.
  • Number of Print Requests
  • ABAP Dump Analysis.
  • Database Performance Monitor.
  • Database Check.
  • Monitoring Application Users.


How do we monitor an SAP system?


Checking Application Servers (SM51)


This transaction is used to check all active application servers.


Here you can see which services or work processes are configured in each instance.

Monitoring Work Processes for Individual Instances (SM50)


Displays all running, waiting, stopped, and PRIV processes for a particular instance. Under this step we check all the processes; each process's status should always be waiting or running. If any process has a status other than waiting or running, we need to check that particular process and report accordingly.


This transaction displays a lot of information, such as:

  1. The status of each work process (whether it is occupied or not).
  2. If a work process is running, the action it is performing, shown in the Action column.
  3. The table currently being worked on.

Some typical problems:

  • Users take a long time to log on, cannot log on, or online transactions are very slow. This can happen when the DIA work processes are fully utilized. It can also be the result of long-running jobs (red indicator under the Time column). If necessary, you can cancel a session by selecting the job and choosing Process > Cancel Without Core. This cancels the job and releases the work process for other users/processes.
  • Some users may have PRIV status under the Reason column. This can mean the user's transaction is so large that it requires more memory; when this happens, the DIA work process is "owned" by the user and is not released for other users. If this happens, check with the user and, if possible, run the job as a background job.
  • If a long print job is occupying an SPO work process, investigate the problem. It could be related to the print server or the printer.

Monitoring System wide Work Processes (SM66)


By checking the work process load using the global work process overview, we can quickly investigate the potential cause of a system performance problem.

Monitor the work process load on all active instances across the system

Using the Global Work Process Overview screen, we can see at a glance:

  • The status of each application server
  • The reason why it is not running
  • Whether it has been restarted
  • The CPU and request run time
  • The user who has logged on and the client that they logged on to
  • The report that is running

Monitor Application User (AL08 and SM04)

This transaction displays all the users of active instances.


Monitoring Update Processes (SM13)


Execute transaction SM13, enter '*' in the User field, and click the execute button.


If there are no long-pending update records and no updates are currently running, this queue will be empty.


If the update service is not active, gather the following information:

  • Is the update active? If not, was it deactivated by the system or by a user? (Use the status and information buttons on this screen to find out.)
  • Have any updates been cancelled?
  • Is there a long queue of pending updates older than 10 minutes?


Monitoring Lock Entries (SM12)

Execute transaction SM12 and enter '*' in the User Name field.


SAP provides a locking mechanism to prevent other users from changing the record that you are working on. In some situations, locks are not released. This can happen if a user is cut off, e.g. due to a network problem, before being able to release the lock.

These old locks need to be cleared or it could prevent access or changes to the records.

We can use the lock statistics to monitor the locks that are set in the system. We record only those lock entries that have a date/time stamp from the previous day.

Monitoring System Log (SM21)

We can use the log to pinpoint and rectify errors occurring in the system and its environment.
We check log for the previous day with the following selection/option:

  • Enter Date and time.
  • Select Radio Button Problems and Warnings
  • Press Reread System Log.


Tune Summary (ST02)

Step 1: Go to ST02 to check the Tune summary.

Step 2: If you see any red values under Swaps, double-click them.


Step 3: In the following screen, click the 'Current Parameters' tab.


Step 4: Note down the values and the profile parameters.


Step 5: Go to RZ10 (to change the Profile parameter values)

Step 6: Save the changes.

Step 7: Restart the server for the new changes to take effect.

CPU Utilization (ST06)



The idle CPU rate should normally be around 60-65%. If CPU utilization exceeds this band (i.e., the idle value drops below it), we must start checking at least the following:

  • Run OS-level commands (top) and check which processes are taking the most resources.
  • Go to SM50 or SM66 and check for long-running jobs or long update queries.
  • Go to SM12 and check the lock entries.
  • Go to SM13 and check the update active status.
  • Check for errors in SM21.
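As a sketch, the idle-CPU guideline above can be expressed as a simple check. The 60% threshold is the rule of thumb from this section, not a fixed SAP limit:

```python
def cpu_needs_investigation(idle_pct, min_idle=60.0):
    """Flag a host whose idle CPU has dropped below the ~60-65% guideline."""
    return idle_pct < min_idle

# An idle value of 25% would trigger the checklist above; 70% would not.
print(cpu_needs_investigation(25.0))  # True
print(cpu_needs_investigation(70.0))  # False
```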

ABAP Dumps (ST22)

Here we check for previous day’s dumps


Spool Request Monitoring (SP01)

For spool request monitoring, execute SP01 and select as below:

  • Put ‘*’ in the field Created By
  • Click on execute button.



Here we record only those requests which are terminated with problems.


Monitoring Batch Jobs (SM37)


For Monitoring background jobs, execute SM37 and select as below:

  • Put ‘*’ in the field User Name and Job name
  • In Job status, select: Scheduled, Cancelled, Released and Finished requests.


Transactional RFC Administration (SM58)


Transactional RFC (tRFC, originally known as asynchronous RFC) is an asynchronous communication method that executes the called function module in the RFC server only once.


We need to select the display period for which we want to view the tRFCs, and then enter '*' in the username field to view all the calls that have not been executed correctly or are waiting in the queue.

QRFC Administration (Outbound Queue-SMQ1)

We specify the client name here and check whether any outgoing qRFCs are in a waiting or error state.

QRFC Administration (Inbound Queue-SMQ2)

We specify the client name here and check whether any incoming qRFCs are in a waiting or error state.

Database Administration (DB02)


After selecting Current Sizes on the first screen, we come to a screen that shows the current status of all the tablespaces in the system.


If any tablespace is more than 95% full and autoextend is off, we need to add a new datafile so that the database does not fill up.
We can also review the history of the tablespaces.


We can select Months, Weeks, or Days here to see the changes that take place in a tablespace, and determine its growth by analyzing these values.



Database Backup logs (DB12)

From this transaction we can determine when the last successful backup of the system took place. We can review the previous day's backups and see whether everything was fine or not.

We can also review the redo log files and see whether redo log backup was successful or not.


Quick Review

Daily Monitoring Tasks

  1. Critical tasks
  2. SAP System
  3. Database


Critical tasks

No Task Transaction Procedure / Remark
1 Check that the R/3 System is up. – Log on to the R/3 System.
2 Check that daily backups executed without errors. DB12 Check the database backup.


SAP System

No Task Transaction Procedure / Remark
1 Check that all application servers are up. SM51 Check that all servers are up.
2 Check work processes (started from SM51). SM50 All work processes should have a "running" or "waiting" status.
3 Global work process overview. SM66 Check that no work process has been running for more than 1800 seconds.
4 Look for any failed updates (update terminates). SM13 Set the date to one day ago, enter * in the user ID, set to "all" updates, and check for lines with "Err."
5 Check the system log. SM21 Set the date and time to before the last log review. Check for:

  • Errors
  • Warnings
  • Security messages
  • Database problems

6 Review for cancelled jobs. SM37 Enter an asterisk (*) in User ID. Verify that all critical jobs were successful.
7 Check for "old" locks. SM12 Enter an asterisk (*) for the user ID.
8 Check for users on the system. SM04 / AL08 Review for unknown or different user IDs and terminals. This task should be done several times a day.
9 Check for spool problems. SP01 Enter an asterisk (*) for Created By. Look for spool jobs that have been "In process" for over an hour.
10 Check the job log. SM37 Check for:

  • New jobs
  • Incorrect jobs

11 Review and resolve dumps. ST22 Look for an excessive number of dumps, and for dumps of an unusual nature.
12 Review buffer statistics. ST02 Look for swaps.



Database

No Task Transaction Procedure / Remark
1 Review the error log for problems. ST04
2 Database growth / missing indexes. DB02 If a tablespace is more than 90% used, add a new datafile to it. Rebuild the missing indexes.
3 Database statistics log. DB13


SAP process monitor tcodes (Transaction Codes)

  • VT20 – Overall Shipment Process Monitor (Logistics Execution – Transportation)
  • SPIM – Process Monitoring: Meta Data (Basis – Process Monitoring Infrastructure)
  • SPIO – Process Monitoring Overview (Basis – Process Monitoring Infrastructure)
  • POC_MONITOR – Process Monitor – SAP GUI (Cross Application – Process Orchestration for Built-In Processes)
  • RSPCM – Monitor Daily Process Chains (BW – Data Staging)
  • RZ20 – CCMS Monitoring (Basis – Monitoring)
  • SXMB_MONI_BPE – Process Engine – Monitoring (Basis – Runtime Workbench/Monitoring)
  • BBP_MON_SC – EBP Monitor Shopping Cart (SRM – Enterprise Buyer)
  • RSMON – Administration – DW Workbench (BW – Data Warehousing Workbench)
  • MIGO – Goods Movement (MM – Inventory Management)
  • SE38 – ABAP Editor (Basis – ABAP Editor)
  • ME21N – Create Purchase Order (MM – Purchasing)
  • SE16 – Data Browser (Basis – Workbench Utilities)
  • SM37 – Overview of Job Selection (Basis – Background Processing)
  • RSA1 – Modeling – DW Workbench (BW – Data Warehousing Workbench)
  • SE37 – ABAP Function Modules (Basis – Function Builder)
  • ST22 – ABAP Dump Analysis (Basis – Syntax, Compiler, Runtime)
  • SM50 – Work Process Overview (Basis – Client/Server Technology)
  • RSMO – Data Load Monitor Start (BW – Data Staging)
  • RSPC – Process Chain Maintenance (BW – Data Staging)
  • J1BNFE – NF-e/CT-e Monitor (FI – Localization)
  • LL01 – Warehouse Activity Monitor (Logistics Execution – Warehouse Management)
  • J1B3N – Display Nota Fiscal – Enjoy (FI – Localization)
  • BWCCMS – CCMS Monitor for BW (BW – Data Staging)
  • LRF1 – RF Monitor, Active (Logistics Execution – Warehouse Management)
  • DB16 – Display DB Check Results (Basis – CCMS / Database Monitors for Oracle)
  • ST06N – Operating System Monitor
  • CHANGERUNMONI – Call Change Run Monitor (BW – Data Staging)
  • CASA – BP Control: Match Codes (FI – Contract Accounts)
  • S_AE2_89000019 – CRM_REP_ACT1 (CRM – Business Transactions)
  • CRM_SRV_REPORT – Transaction Monitor: Service Processes (CRM – Service Order)
  • SWI2_DEAD – Work Items with Monitored Deadlines (Basis – SAP Business Workflow)
  • SWDP – Show Graphical Workflow Log (Basis – SAP Business Workflow)
  • CATR – Reorganize Interface Tables (Cross Application – Time Sheet)
  • EMASN – IDoc Monitor for Inb. Ship. Notific. (IS – Monitors for Automotive)
  • POC_CUSTOMIZING – Customizing Process Observer (Cross Application – Process Orchestration for Built-In Processes)
  • POC_VIEWER – Process Viewer – SAP GUI (Cross Application – Process Orchestration for Built-In Processes)


Using DPMON: Work Process Monitoring

At times, when you are unable to log in to the R/3 system for various reasons, a tool named dpmon can be used to get the process overview of an instance in text mode.

DPMON is the Dispatcher Monitor (dpmon.exe on Windows), located under /usr/sap/<SID>/SYS/exe/run. DPMON checks the status of the work processes and the dispatcher queue at the operating system level.

First go to the profile directory /usr/sap/<SID>/SYS/profile.

1. Log on at the operating system level as <sid>adm. Use the command cdpro to change to the profile directory and list the available profiles.

2. Start the utility program dpmon, using the profile of the application server:

example command: dpmon pf=<instance profile>

Once executed, dpmon displays its initial screen. Type 'm' to display the menus available in dpmon.

Now select 'p' to display the work process admin table. Identify the work process you want to kill and select option 1 to kill it.



 Scheduling Background Jobs


You can define and schedule background jobs in two ways from the Job Overview:

  • Directly from Transaction SM36. This is best for users already familiar with background job scheduling.
  • The Job Scheduling Wizard. This is best for users unfamiliar with SAP background job scheduling. To use the Job Wizard, start from Transaction SM36 and either choose Goto → Wizard version or simply use the Job Wizard button.


  1. Call Transaction SM36 or choose CCMS → Jobs → Definition.
  2. Assign a job name. Decide on a name for the job you are defining and enter it in the Job Name field.
  3. Set the job's priority, or "Job Class":
  • High priority: Class A
  • Medium priority: Class B
  • Low priority: Class C
  4. In the Target server field, indicate whether to use system load balancing:
  • To let the system use load balancing to automatically select the most efficient application server available at the moment, leave this field empty.
  • To run the job on a particular application server, enter a specific target server.
  5. If spool requests generated by this job are to be sent to someone as email, specify the email address via the Spool list recipient button.
  6. Define when the job is to start by choosing Start Condition and completing the appropriate selections. If the job is to repeat, or be periodic, check the box at the bottom of this screen.
  7. Define the job's steps by choosing Step, then specify the ABAP program, external command, or external program to be used for each step.
  8. Save the fully defined job to submit it to the background processing system.
  9. When you need to modify, reschedule, or otherwise manipulate a job after you've scheduled it the first time, you'll manage it from the Job Overview.

Note: Release the job so that it can run. No job, even those scheduled for immediate processing, can run without first being released.




System Information

There are a lot of transactions to do that.

Most of the time I use the following ones:
ST06N – For system memory, disks, CPU usage, LAN, filesystems
ST04 – For the DB performance
DB02 – For the database statistics
ST02 – For the SAP buffers.
SM51 – Overview of SAP server processes

In table TSTC you can find more transactions.
Check transactions that start with ST* or DB*, and, for specific SAP monitoring, the SM* tcodes.




Load Balancing in SAP

There are 2 instances here: one is the central instance (CI) and the other is a dialog instance.

I am a little confused about the basis on which users are logged on to each instance.
On what basis does the message server determine and direct the logon of a particular user to a particular instance?

As per my analysis, the report RSRZLLG0, which provides the message server with information for load balancing, is not scheduled.
Also, in SMLG there are 2 groups created for both instances with the same group name, and the threshold values for response time and users are maintained in each of them.


Also, the CPU idle time on the central instance is always low (below 30% at times), and I can see more users on the CI than on the dialog instance.


What is SAP Logon Load Balancing?

  • SAP R/3 logon load balancing enables an SAP Basis admin to create various logon groups.
  • Logon groups are logical groups of users that can be assigned to one or more SAP instances.
  • Once a group GroupX is assigned to an instance I, all users who log on using that group in the SAP Logon pad automatically log on to that instance.
  • If you have two instances I1 and I2, you can create two groups GroupX and GroupY and assign them to I1 and I2 respectively. You can also divide your users according to some criterion (for example, floors 1 through 5 use GroupX and floors 6 through 10 use GroupY) and ask them to use only those groups.
  • In transaction SMLG, you can create and delete logon group entries, remove instances from groups, and delete entire logon groups.
  • When you call transaction SMLG, the CCMS: Maintain Logon Groups screen shows a table with entries for logon groups and the associated instances. An entry in this table, which is characterized by an instance and a logon group, is known as an assignment.
  • A logon group to which multiple instances belong therefore consists of multiple assignments in this table, where each assignment contains one instance.

How to Create a Logon Group or Assign an Instance to a Logon Group

  • Choose CCMS → Configuration → Logon Groups, or call transaction SMLG.
  • Choose Create Assignment, and specify the desired name of the logon group in the Logon Group input field. Enter the name of the instance that is to belong to the logon group. The logon group SPACE is reserved for SAP; therefore, do not use this name.
  • Repeat the last step until you have entered all instances that are to belong to the logon group.
  • You can assign one group to one or more instances.
  • Save your changes.

How to Delete a Logon Group or Remove an Instance from a Logon Group

  • Choose CCMS → Configuration → Logon Groups, or call transaction SMLG.
  • Select any assignment for the logon group that you want to delete or from which you want to remove an instance.
  • To remove an instance from the selected logon group, choose Remove Instance, enter the desired instance on the next screen, and confirm your choice by choosing (Delete).
  • To delete the desired logon group, choose Delete Group and confirm your choice by choosing Delete on the next screen.
  • Save your changes.

Changing Properties of an Assignment, a Logon Group, or an Instance

  • Choose CCMS → Configuration → Logon Groups, or call transaction SMLG.
  • To change the properties of an assignment, double-click the assignment, and switch to the Properties tab page.
  • You can change the following properties:
  • IP address of the application server: only enter a value in this field if the application server associated with the instance needs to be addressed by the front end with a different IP address from the one used for internal application server communication. This value applies only to the selected assignment.
  • Settings for external RFC call: You can use this indicator to determine whether logon using an external RFC connection is to be permitted. This value applies to the selected logon group.
  • Threshold values for dialog response time and number of users logged on

How it all works

  • If you log on using a logon group, the logon is automatically performed using the instance of the group that currently has the best dialog quality. This quality is a key figure that is calculated from the number of users logged on and the average dialog response time.
  • To allow the different prerequisites of different instance to be taken into account in this calculation, you can set threshold values for the dialog response time and the number of users yourself. The larger the actual values for response time and the number of users are in comparison to the threshold values set, the lower the quality. These figures apply for the selected instance.
  • The values for Response Time and Users are not absolute limits, but rather thresholds. Even if the current value for response time or number of users is higher than this threshold value, it is possible to log on to another instance. The threshold values only influence the calculation of the current logon server of the logon groups.
  • You can use a preview to see how the threshold-value settings affect the quality calculation, based on current performance data. Choose Test to do this. In a logon group, the instance with the highest quality key figure is always selected for the logon.
  • Choose Copy, and save your changes.
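The selection rule can be illustrated with a toy calculation. The formula below is an assumption for illustration only (SAP's actual SMLG quality computation is internal); it just encodes the rule that quality drops as the measured response time and user count grow relative to their thresholds:

```python
def logon_quality(resp_time_ms, users, resp_threshold_ms, user_threshold):
    """Illustrative quality figure: higher is better; it shrinks as the
    measured values grow relative to the SMLG thresholds. Not SAP's real formula."""
    load = resp_time_ms / resp_threshold_ms + users / user_threshold
    return 1.0 / load

# Two hypothetical instances sharing the same thresholds.
instances = {
    "app1": logon_quality(400, 80, resp_threshold_ms=1000, user_threshold=200),
    "app2": logon_quality(900, 150, resp_threshold_ms=1000, user_threshold=200),
}
# The logon is routed to the instance with the highest quality figure.
print(max(instances, key=instances.get))  # app1
```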



SAP Load Balancing and Work Processes Troubleshoot

The benefit of segregating user groups by line-of-business (using logon groups) is related to the point that groups of users (like SD users or HR users, for example) tend to use the same sets of data.  They (generally) work with the same groups of tables and hit the same indexes using the same programs (transactions).

So, if you can group all of the users hitting the same tables, onto (or one set of) App server(s), then you can tune the App server buffers to a much greater extent.  If the FI users (generally) never hit against the HR tables then the App servers in the FI group don’t (generally) have to buffer any HR data.  That leaves you free to make memory and buffer adjustments to a more drastic extent, because you don’t have to worry (as much) about screwing the HR users (as an example), when you’re adjusting the FI server group.

So, (in opinion only) you should start with a buffer hit ratio analysis / DB table & index access analysis (by user group) to see where you would get the best benefit from this kind of setup.  If you don’t have this kind of info, then creating logon groups by line-of-business may have no benefit (or worst case, may make performance degrade for the group with the highest load %).  You need some historical information to base your decision on, for how to best split the users up.

You may find that 50% of the load is from the SD users and so you may need one group for them (with 3 App servers in it) and one other group for everyone else (with the other 3).

The logon group(s) will have to be referenced by SAP GUI, so SAP GUI (or saplogon.ini, and maybe the services file) will have to change to accommodate any new groups you create in SMLG. Also consider that there are variables for time-of-day (load varies by time of day) and op-mode switches (resources vary by op-mode).
All work processes are running? What will be our action?

Are all the work processes (dia,btc,enq,upd,up2,spo) running or just all the dialog work processes?

If all the work processes are running, then you may want to look at SM13 and see if updates are disabled.  If they are, look at the alert log (if it's an Oracle database) and see if you have any space-related errors (e.g. ORA-01653 or ORA-01654).  If you do, add a datafile or raw device file to the applicable tablespace and then re-enable updates in SM13.

If only all the dialog work processes are running, there are several possible causes.  First, look to see if there’s a number in the Semaphore column in SM50 or dpmon.  If there is, click once on one of the numbers in the Semaphore column to select it and then, press F1 (help) to get a list of Semaphores.  Then, search OSS notes and, hopefully, you’ll find a note that will tell you how to fix the problem.

If it's not a semaphore (or sometimes even if it is), use vmstat on UNIX or Task Manager on Windows to see if the operating system is running short on memory, which would cause it to swap.  In vmstat, the free column (which is in 4k pages on most UNIX derivatives) will be consistently around 5MB or so, and the pi and/or po columns will have non-zero values.  The %idle column in the cpu or proc section will be 0 or a very low single digit, while the sys column will be a very high double-digit number, because the operating system has to swap programs out to and in from disk before it can execute them.
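The vmstat reading described above can be condensed into a rough heuristic. The thresholds are the rules of thumb from this paragraph, not hard limits:

```python
def looks_like_swapping(free_pages, pi, po, idle_pct):
    """UNIX vmstat heuristic: free memory consistently low (~5 MB, i.e.
    roughly 1280 4k pages), non-zero page-in/page-out activity, and
    near-zero idle CPU together suggest the OS is swapping."""
    return free_pages <= 1280 and (pi > 0 or po > 0) and idle_pct <= 5

print(looks_like_swapping(free_pages=1200, pi=40, po=15, idle_pct=0))  # True
print(looks_like_swapping(free_pages=50000, pi=0, po=0, idle_pct=70))  # False
```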

In task manager, look at free memory in the physical memory section under the performance tab.  If it’s 10MB or 15MB (I think), then the operating system will be swapping.

Usually, when all the dialog work processes are running, you won't be able to log in via SAPgui and will need to execute the dpmon utility at the command line.  The procedure is basically the same on UNIX and Windows.


On UNIX:

telnet to the server and log in as the <sid>adm user
cd to the /sapmnt/SID/profile directory
execute "dpmon pf=<SID>_<instance>_<hostname>" (e.g. PRD_DVEBMGS00_hercules), select option "m" and then option "l"

On Windows:

Click on START, then RUN
Type “cmd” and press enter
change to drive where profile directory resides (e.g. f:)
cd to \sapmnt\SID\profile
execute "dpmon pf=<SID>_<instance>_<hostname>" (e.g. PRD_DVEBMGS00_zeus), select option "m" and then option "l"

On both operating systems, you'll see a screen that looks like what you see in SM50.  What you do next depends on what you see here, but checking the developer trace files (e.g. dev_disp) in the work directory (e.g. /usr/sap/SID/DVEBMGS00/work) is never a bad idea.


Local Client Creation

  • We can create a client using the transaction SCC4. Enter the client name and make the appropriate selections.

Note that SAP delivers the software with standard clients 000 and 001. You may not work in client 000, but may use client 001. However, SAP recommends that you begin SAP System implementation by creating a new client as a copy of client 000.

  • To Copy a client
    • A local client copy copies between clients within the same SAP System.
    • A remote client copy allows you to copy between clients in different SAP Systems. You can use a remote client copy to, for example, transport client-dependent as well as client-independent Customizing data between SAP Systems.
    • A remote client copy proceeds in the same way as a local copy, but sends the data through a remote function call (RFC) connection to the target client.
    • A remote client copy is easy to use, and does not require file system space on operating system level.
    • The limitations of a remote client copy are as follows:

    • A remote client copy does not create a file at operating system level, so there is no "hard copy" of the client being copied. Therefore, the same, identical client copy cannot be duplicated at a later date.

  • To delete a client from within SAP System:
    • Log on to the client to be deleted.
    • Use Transaction code SCC5, or from the SAP System initial screen choose Tools → Administration → Administration → Client admin → Special functions → Delete client.
    • Start the deletion of the client, preferably using background processing.
    • When you delete a client entry from table T000 with client maintenance (TransactionSCC4), you can no longer log on to the client or update it using change requests. The deletion process, however, does not eliminate the data belong to the client. This means the client-dependent data remains in your SAP System, occupying space in the database. Therefore, to eliminate an SAP client entirely, that is, to delete both the client and the client-dependent data, use the client delete functionality (Transaction SCC5).
    • Deleting a client entry with client maintenance (Transaction SCC4) allows you to temporarily lock the client. The deletion procedure preserves the data for the client but prevents users from logging on to the client or accessing the data belonging to the client. To restore the client and allow logon, recreate the client entry using client maintenance.
    • The amount of time required for the deletion of a client can be reduced by performing the deletion using parallel processes.


Client Creation (SCC4) & Logical system(BD54) in SAP

Step1… Create Logical system for new client

What is logical System in SAP ?

A logical system enables the system to recognize the target system as an RFC destination. Logical systems are required for communication between systems within the landscape, and you need one when you create a new client in SAP.
The T-Code to create a logical system in SAP is BD54.
Logical system names are transportable.

How to create Logical System in SAP ?

1. Run transaction code BD54. Click the pencil icon to convert from display to change view. Then, click on New Entries.

2. Use the following naming convention for the logical system names, <System ID>CLNT<Client>. Save your entries, which are included in a transport request. Create the logical system name for the central system in all child systems.

For example, if your new client number is 500 and the SID (System ID) is P10, then the logical system will be P10CLNT500.

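The naming convention above can be expressed as a tiny helper (an illustrative sketch; the function name is made up):

```python
def logical_system(sid: str, client: int) -> str:
    # <System ID>CLNT<Client>, e.g. SID P10 + client 500 -> P10CLNT500
    return f"{sid}CLNT{client:03d}"

assert logical_system("P10", 500) == "P10CLNT500"
```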

Step2… Create New client in SCC4

How to create a new SAP client

You have finished your SAP installation and want to use your SAP system as your users’ work area.
Before you can use it, you have to create a client in your SAP system. Creating SAP clients is part of SAP Basis tasks.

An SAP client is an independently accountable business unit. Each client is identified by a three-digit number. In the standard system, SAP delivers the following pre-configured clients:

  • 000 : for administration purposes and as a template for additional clients
  • 001: for test purposes and as a template for additional clients.
  • 066: for SAP Remote Services

Users can only work in the client assigned to them. Creating your own clients is one of the first steps in customizing an SAP system.

A client is created in two steps:
1. Make the new client known to the SAP system, and make important basic settings.
2. Fill the client with data.
After that you can use your SAP client.

1. To create a new client, go to SAP Menu -> Tools -> Administration -> Administration -> Client Administration, or use SAP Transaction code SCC4 – Client Maintenance.

SAP Client Maintenance – SCC4
  1. It will bring you to the initial screen of SAP Client Maintenance. Click New Entries to create a new SAP client, then choose Continue
Create new SAP Client
  2. Enter your client data and the logical system which we created in Step 1
Client Option
  3. Save and make sure your new client is created.
New SAP Client created


NOTE: If the client role is production then the settings should be as follows:

  1. No changes allowed
  2. No changes to Repository and cross-client Customizing objects
  3. Protection level 1 – No overwriting
  4. eCATT and CATT not allowed

T000 – table that stores the list of created clients

Step3… Make login/no_automatic_user_sapstar=0
Step-by-step procedure:

  • Log in to the SAP system using client 000 or 001
  • Enter transaction code RZ10
  • Click Utilities and then Import profiles
  • Click Back (F3)
  • Place the cursor on Profile and press F4
  • Double-click the Default profile
  • Select Extended maintenance and click Change
  • Click Parameter (Create, F5)
  • Place the cursor on the parameter name and press F4
  • Click the small triangle at the top of the box
  • Type *login* and press Enter
  • Double-click the ‘login/no_automatic_user_sapstar’ parameter
  • Enter the parameter value ‘0’
  • Click Copy and then Back (F3)
  • Click Copy again to transfer the changed profile
  • Click Back (F3)
  • Click Save, click Back, and click Yes to activate the profile
  • Log out of the SAP system
  • Stop the SAP system
  • Start the SAP system
  • Now try to log in as SAP* with password ‘pass’
  • You should now be able to log in to the newly created client temporarily with this default user, before doing the client copy



Client Copy Using Client Export and Import 

How to do client copy using client export and import?


Today I am going to walk through the client copy using the client export and import procedure. It is an easier and safer way than a direct client copy.

Client Export Steps:

1. Run SCC8
2. Select the profile for the desired copy type (usually SAP_ALL for all data, or SAP_USER for user masters only. You will need direction from the requester as to the correct selection here. Use Profile -> Display Profile to display profile details.)
3. Select the target system (or group)
4. De-select “Test Run” (if selected)
5. Run the export
– Up to 3 requests are created, depending on the data selected and available:
1. “SIDKO00353” for transporting client-independent data, if you have selected this
2. “SIDKT00353” for transporting client-specific data
3. “SIDKX00353” for transporting client-specific texts, provided texts are available in this client
6. Monitor TP logs for errors and export files for growth
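The three request names follow a fixed scheme, <SID> plus KO/KT/KX plus a number (an illustrative sketch; the number is assigned by the system, and the helper name is made up):

```python
def export_requests(sid: str, number: str) -> dict:
    # Naming scheme of the up-to-three requests created by a client export (SCC8).
    return {
        "client-independent data": f"{sid}KO{number}",
        "client-specific data": f"{sid}KT{number}",
        "client-specific texts": f"{sid}KX{number}",
    }

assert export_requests("SID", "00353")["client-specific data"] == "SIDKT00353"
```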

Client Import Steps:

1. Create the client (SCC4)
2. Log in to the client (SAP* / pass)
3. Manually add the “O” transport, then “X”, then “T” to the TMS buffer
4. Highlight #1 and use “Request -> Import” to launch the import tool
5. Monitor the “I” file in the OS directory /usr/sap/trans/tmp for progress info
6. After the import is complete, perform the “post-processing steps” from the client tool (SCC7)

So, if you have 3 transport requests and the data has been exported successfully then go for an import in quality system.

This will help you refresh the QA server with PRD server data and settings.


About client copy from PRD to QAS through export/import: is it possible to do it during office hours (while all users are still active), or do I have to lock them first? Right now I have 164 GB of data in the database – how long will it take? I use SUSE Enterprise v10 as the OS; what command should I use to increase the file system in case the space is not enough?


You need off-peak hours to do a client copy, since this process takes most of your server resources. It is hard to say how long it will take, since that depends on your server hardware specification and your SAP configuration.

If you are using LVM with ext3 or ReiserFS, then this is a simple step.

Steps when using reiserfs :

#lvextend -L +size /dev/volume_group/logical_volume
#resize_reiserfs -s +size /dev/volume_group/logical_volume

Steps when using ext2/ext3 :

#lvextend -L +size /dev/volume_group/logical_volume
#resize2fs /dev/volume_group/logical_volume

Just make sure you contact your OS and hardware vendor before resizing a file system.



In SAP we have three types of client copies:
  • Local client copy, using T-code SCCL
  • Remote client copy, using T-code SCC9
  • Client export/import, using T-codes SCC8 (export) and SCC7 (import)

In our case we will see the client export and import procedure with screenshots.
We have two scopes for the task: one is to export all client data, and the other is to export and import only the user master data.
Here we will see a user master data copy from one client to another.

User master data export before client refresh

Step 1: Call T-code SCC8 and select the parameters

Step 2: Click Start Immediately (or Schedule as Background Job) and continue

Step 3: Click OK


  1. “DR1KO02438” for transporting cross-client data, if you have selected this
  2. “DR1KT02438” for transporting client-specific data
  3. “DR1KX02438” for transporting client-specific texts, provided texts are available in this client

Step 4: Go to SE01 and click on Transports. If you see this message, the export is still running; click Refresh to get the latest information.

Step 5: Then go to the target client, check the request in SE01 -> Transports, and note the request number

Step 6: Go to STMS, select the request and click on Import Request (if the request is not in the list, add it manually)

Step 7: Select the target client and click on Import

Step 8: The system will ask for the password; enter it

Step 9: The import will be done in a few minutes

Step 10: Now call T-code SCC7 and click on OK

Step 11: Click on Schedule as Background Job, choose the background server (using F4), and select Immediately

Step 12: Click on Schedule Job and then on Continue

Step 13: Click on OK. Check the log in SCC3

Step 14: Go to SCC3

Step 15: Double-click the log and check the details

Basic SAP Data Types

The parameter types that the Microsoft BizTalk Adapter 3.0 for mySAP Business Suite supports are governed by the:

  • ABAP data types that SAP supports
  • Database data types that SAP supports

This section presents a mapping between the ABAP and database data types, and their corresponding .NET and XML schema types.

The information in this section applies to RFCs, tRFCs, and BAPIs. SAP data types are always represented as strings (xsd:string) in IDOCs. This is to support the BizTalk Server flat file parser.
The Microsoft BizTalk Adapter 3.0 for mySAP Business Suite supports safe typing for some ABAP data types. When safe typing is enabled, these data types are represented as strings. You configure safe typing by setting the EnableSafeTyping binding property. Safe typing is disabled by default. For more information about the SAP adapter binding properties, see Working with BizTalk Adapter 3.0 for mySAP Business Suite Binding Properties.

The following table shows how the ABAP data types are surfaced when safe typing is not enabled. (EnableSafeTyping is false). Data types that are surfaced differently when safe typing is enabled are marked with an asterisk (*).

ABAP Data Type RFC Type XSD type .NET type Format string
I (Integer) RFC_INT xsd:int Int32
Internal (RFC_INT1) RFC_INT1 xsd:unsignedByte Byte
Internal (RFC_INT2) RFC_INT2 xsd:short Int16
F (Float) RFC_FLOAT xsd:double Double
P (BCD number) RFC_BCD xsd:decimal if length <= 28; xsd:string if length > 28 Decimal (format string differs for 0 vs. more than 0 decimal places)
C (Character) RFC_CHAR xsd:string String
D (Date: YYYYMMDD)* RFC_DATE xsd:dateTime DateTime Internally, the adapter deserializes the value into a DateTime object. It then invokes the DateTime.ToUniversalTime method to convert the value of this object to UTC. Finally the date component (DateTime.Date) is used to create the value that is sent to the SAP system. The SAP system treats this date value as local time.

You should specify date values as UTC to avoid conversion.

  • For xsd:dateTime, the following pattern is recommended: “(\d\d\d\d-\d\d-\d\d)T(00:00:00)(.*)Z”.
  • For DateTime objects set DateTime.Kind to DateTimeKind.Utc.
T (Time: HHMMSS)* RFC_TIME xsd:dateTime DateTime Internally, the adapter deserializes the value into a DateTime object. It then invokes the DateTime.ToUniversalTime method to convert the value of this object to UTC. Finally the time component (DateTime.TimeOfDay) is used to create the value that is sent to the SAP system. The SAP system treats this time value as local time.

You should specify time values as UTC to avoid conversion.

  • For xsd:dateTime, the following pattern is recommended: “(0001-01-01)T(\d\d:\d\d:\d\d)(.*)”.
  • For DateTime objects set DateTime.Kind to DateTimeKind.Utc.

For example, if your local time is 9:15 am, express this as “(0001-01-01)T(09:15:00)Z”

N (Numeric string)* RFC_NUM xsd:int if length <= 9
xsd:long if length > 9 and <= 19
xsd:string if length > 19
X (Byte) RFC_BYTE xsd:base64Binary Byte[]
STRING RFC_STRING xsd:string String
XSTRING RFC_BYTE xsd:base64Binary Byte[]

*Indicates that the data type is surfaced differently when safe typing is enabled.
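The recommended date and time patterns above can be sanity-checked with plain regular expressions (a Python sketch, illustrative only; the adapter itself enforces these as XSD facets):

```python
import re

# Patterns recommended above for xsd:dateTime values sent to the SAP system.
DATE_PATTERN = r"(\d\d\d\d-\d\d-\d\d)T(00:00:00)(.*)Z"
TIME_PATTERN = r"(0001-01-01)T(\d\d:\d\d:\d\d)(.*)"

# A date-only value: time component pinned to 00:00:00, UTC marker Z.
assert re.fullmatch(DATE_PATTERN, "2007-07-07T00:00:00Z")
# A time-only value: date component pinned to 0001-01-01.
assert re.fullmatch(TIME_PATTERN, "0001-01-01T09:15:00Z")
```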

Safe Typing Enabled

The following table shows the ABAP data types that are surfaced differently when safe typing is enabled (the EnableSafeTyping binding property is true).

ABAP Data Type RFC Type XSD type .NET type Format string
D (Date: YYYYMMDD) RFC_DATE xsd:string String SAP date format: YYYYMMDD.

Characters are allowed for date digits, so the value is essentially an eight character string

T (Time: HHMMSS) RFC_TIME xsd:string String SAP time format: HHMMSS.

Characters are allowed for time digits, so the value is essentially a six character string

N (Numeric string) RFC_NUM xsd:string String An n character string; where n = length of the numc field.

ABAP data types that are not in this table are surfaced in the same way as when safe typing is not enabled.

Support for Date and Time Fields

When safe typing is not enabled, ABAP Date (D) and Time (T) types are surfaced as xsd:dateTime; however, the pattern facet surfaced for the Date and Time types is different.

  • The pattern facet for Date is: (\d\d\d\d-\d\d-\d\d)T(00:00:00)(.*)

    For example, July 7, 2007 (2007-07-07) is represented as:


  • The pattern facet for Time is: (0001-01-01)T(\d\d:\d\d:\d\d)(.*)

    For example, 18:30:30 (6:30 pm and 30 seconds) is represented as:


How does the Adapter Represent Minimum and Maximum Time Values on Inbound Messages (from SAP)?

The SAP adapter uses the following guidelines when it receives time values from the SAP system:

  • The adapter treats 000000 (hhmmss) and 240000 (hhmmss) as 0 hours, 0 minutes, and 0 seconds.

The way in which the Microsoft BizTalk Adapter 3.0 for mySAP Business Suite surfaces database data types also depends on whether safe typing is enabled. The following table shows how the adapter surfaces database data types when safe typing is not enabled (the EnableSafeTyping binding property is false). Data types that are surfaced differently when safe typing is enabled are marked with an asterisk (*).

Database Data Type RFC Type XSD .NET Type
ACCP (Posting Period)* RFC_NUM xsd:int Int32
CHAR RFC_CHAR xsd:string String
CLNT (Client) RFC_CHAR xsd:string String
CURR (Currency field) RFC_BCD xsd:decimal

The SAP adapter rounds off the decimal values based on the definition of the DECIMAL parameter. For example, if a DECIMAL parameter can accept up to five digits after the decimal point, a value such as 4.000028 is rounded off to 4.00003.
CUKY (Currency Key) RFC_CHAR xsd:string String
DATS (Date field)* RFC_DATE xsd:dateTime DateTime

Internally, the adapter deserializes the value into a DateTime object. It then invokes the DateTime.ToUniversalTime method to convert the value of this object to UTC. Finally the date component (DateTime.Date) is used to create the value that is sent to the SAP system. The SAP system treats this date value as local time.

You should specify date values as UTC (DateTime.Kind = DateTimeKind.Utc) to avoid conversion. The following pattern is recommended: “(\d\d\d\d-\d\d-\d\d)T(00:00:00)(.*)Z”.

DEC (Amount) RFC_BCD xsd:decimal

The SAP adapter rounds off the decimal values based on the definition of the DECIMAL parameter. For example, if a DECIMAL parameter can accept up to five digits after the decimal point, a value such as 4.000028 is rounded off to 4.00003.
FLTP (Floating point) RFC_FLOAT xsd:double Double
INT1 RFC_INT1 xsd:unsignedByte Byte
INT2 RFC_INT2 xsd:short Int16
INT4 RFC_INT xsd:int Int32
LANG (Language Key) RFC_CHAR xsd:string String
LCHR RFC_STRING xsd:string String
LRAW (long byte seq) RFC_BYTE xsd:base64Binary Byte[]
NUMC* RFC_NUM xsd:int Int32 if length <= 9
xsd:long Int64 if length > 9 and <= 19
xsd:string String if length > 19
PREC (Accuracy) RFC_INT2 xsd:short Int16
QUAN (Quantity) RFC_BCD xsd:decimal

The SAP adapter rounds off the decimal values based on the definition of the DECIMAL parameter. For example, if a DECIMAL parameter can accept up to five digits after the decimal point, a value such as 4.000028 is rounded off to 4.00003.
RAW (byte sequence) RFC_BYTE xsd:base64Binary Byte[]
RAWSTRING (variable length) RFC_BYTE xsd:base64Binary Byte[]
STRING (variable length) RFC_STRING xsd:string String
TIMS (Time field)* RFC_TIME xsd:dateTime DateTime

Internally, the adapter deserializes the value into a DateTime object. It then invokes the DateTime.ToUniversalTime method to convert the value of this object to UTC. Finally the time component (DateTime.TimeOfDay) is used to create the value that is sent to the SAP system. The SAP system treats this time value as local time.

You should specify time values as UTC (DateTime.Kind = DateTimeKind.Utc) to avoid conversion. The following pattern is recommended: “(0001-01-01)T(\d\d:\d\d:\d\d)(.*)Z”.

For example, if your local time is 9:15 am, express this as “(0001-01-01)T(09:15:00)Z”

UNIT (Unit for Qty) RFC_CHAR xsd:string String
[Unsupported] String

*Indicates that the adapter surfaces the data type differently when safe typing is enabled.
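The rounding behavior described for the RFC_BCD columns (CURR, DEC, QUAN) can be reproduced with Python’s decimal module (a sketch; that the adapter rounds half-up internally is an assumption, but it matches the documented example):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_bcd(value: str, decimal_places: int) -> Decimal:
    # Round to the number of places the DECIMAL parameter accepts.
    # ROUND_HALF_UP is assumed; the adapter docs only state "rounds off".
    quantum = Decimal(1).scaleb(-decimal_places)  # e.g. 0.00001 for 5 places
    return Decimal(value).quantize(quantum, rounding=ROUND_HALF_UP)

# Documented example: five decimal places, 4.000028 -> 4.00003
assert str(round_bcd("4.000028", 5)) == "4.00003"
```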

Safe Typing Enabled

The following table shows the database data types that are surfaced differently when safe typing is enabled (the EnableSafeTyping binding property is true).

 Database Data Type RFC Type XSD .NET type String Value Format
ACCP (Posting Period) RFC_NUM xsd:string String Character string
NUMC RFC_NUM xsd:string String Character string
DATS (Date field) RFC_DATE xsd:string String YYYYMMDD
TIMS (Time field) RFC_TIME xsd:string String HHMMSS

Data types that are not in this table are surfaced in the same way as when safe typing is not enabled.
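The length-dependent surfacing of NUMC fields described above can be summarized in a small helper (illustrative; the function name is made up):

```python
def numc_xsd_type(length: int, safe_typing: bool = False) -> str:
    # XSD type the adapter surfaces for an N/NUMC field of a given
    # length, per the tables above.
    if safe_typing:
        return "xsd:string"
    if length <= 9:
        return "xsd:int"
    if length <= 19:
        return "xsd:long"
    return "xsd:string"

assert numc_xsd_type(9) == "xsd:int"
assert numc_xsd_type(19) == "xsd:long"
assert numc_xsd_type(20) == "xsd:string"
assert numc_xsd_type(5, safe_typing=True) == "xsd:string"
```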

The SAP adapter supports the following XSD facets.

RFC Type XSD Facet (EnableSafeTyping = false) XSD Facet (EnableSafeTyping= true)
RFC_BCD XSD pattern facet

Zero decimal places: "([\\-]{0,1})(([0-9]{1," + digitsBeforeDecimal + "}))"

One or more decimal places: "([\\-]{0,1})(([0-9]{0," + digitsBeforeDecimal +"}\\.[0-9]{0," + digitsAfterDecimal + "})|([0-9]{1," + digitsBeforeDecimal + "}))"

RFC_NUM XSD totalDigits facet if length <=19

XSD pattern facet if length > 19

XSD maxLength facet (depends on the length of the value on SAP)
RFC_DATE XSD pattern facet


Pattern contains the time 00:00:00 to be compatible with xsd:dateTime

XSD maxLength facet = 8
RFC_TIME XSD pattern facet


Pattern contains the date 0001-01-01 to be compatible with xsd:dateTime

XSD maxLength facet = 6
RFC_CHAR XSD maxLength facet same
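The RFC_BCD pattern facets quoted above (shown as C#-style string concatenation) can be reconstructed and tested like this (a Python sketch, illustrative only):

```python
import re

def bcd_pattern(digits_before: int, digits_after: int) -> str:
    # Pattern facet the adapter surfaces for RFC_BCD, per the table above.
    if digits_after == 0:
        return r"([\-]{0,1})(([0-9]{1,%d}))" % digits_before
    return (r"([\-]{0,1})(([0-9]{0,%d}\.[0-9]{0,%d})|([0-9]{1,%d}))"
            % (digits_before, digits_after, digits_before))

assert re.fullmatch(bcd_pattern(5, 2), "-123.45")
assert re.fullmatch(bcd_pattern(5, 0), "12345")
```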
The SAP adapter does not support the following data type:

  • ITAB II (hierarchical) table types


What is Transport Request? How to Import/Export it & check logs?

What is a Transport Request?

  • Transport Requests (TRs) are also known as Change Requests. A TR is a kind of ‘container’ or collection of changes that are made in the development system. It also records information about the type of change, the purpose of the transport, the request category and the target system.
  • Each TR contains one or more change jobs, also known as change tasks (the minimum unit of transportable change). Tasks are stored inside a TR, just like multiple files are stored in a folder. A TR can be released only once all the tasks inside it are completed, released or deleted.
  • A change task is actually a list of objects that are modified by a particular user. Each task can be assigned to (and released by) only one user; however, multiple users can be assigned to each Transport Request (as it can contain multiple tasks). Tasks are not transportable by themselves, but only as part of a TR.


Change requests are named in a standard format as: <SID>K<Number> [Not modifiable by system administrators]

  • SID – System ID
  • K – Is fixed keyword/alphabet
  • Number – can be anything from a range starting with 900001


Example: DEVK900030

Tasks also use the same naming convention, with ‘numbers’ consecutive to the number used in TR containing them.

For Example, Tasks in the above mentioned TR Example can be named as: DEVK900031, DEVK900032 …
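The naming convention can be checked with a short parser (illustrative; the helper and the strictness of the regex are assumptions):

```python
import re

# <SID>K<Number>: three-character system ID, fixed 'K', six-digit
# number from the 900xxx range as described above.
TR_NAME = re.compile(r"^([A-Z0-9]{3})K(9\d{5})$")

def parse_tr(name: str):
    m = TR_NAME.match(name)
    if not m:
        raise ValueError(f"not a transport request name: {name!r}")
    return m.group(1), int(m.group(2))

assert parse_tr("DEVK900030") == ("DEV", 900030)
assert parse_tr("DEVK900031") == ("DEV", 900031)
```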

  • The project manager or designated lead is responsible for creating a TR and assigning the project members to it by creating a task for each project member.
  • Hence, she/he is the owner in control of all the changes recorded in that TR, and therefore only she/he can release that TR.
  • However, assigned project members can release their respective change tasks once they are completed.


Workbench Request – contains Repository objects and also ‘cross-client’ customizing objects. These requests are responsible for making changes in the ABAP Workbench objects.

Customizing Request – contains objects that belong to ‘client-specific’ customizing. Depending on the client settings, these requests are recorded automatically when users perform customizing settings, and a target system is assigned automatically according to the transport layer (if defined).

SE01 – Transport Organizer – Extended View

Create a Change Request


  • Change Request can be created in two ways:
    • Automatic – Whenever you create or modify an object, or perform customizing settings, the system itself displays a dialog box for creating a change request, or asks for the name of an already created request, if available.
    • Manually – Create the change request from the Transport Organizer, and then enter required attributes and insert objects.


Release the Transport Request (Export Process)

  • Position the cursor on the TR name or a task name and choose the Release icon (truck). A record of the TR is automatically added to the appropriate import queues of the systems defined in the TMS.
  • Releasing and importing a request generates export & import logs.


The Import Process


Importing TRs into the target system

  • After the request owner releases the Transport Requests from the source system, the changes should appear in the quality and production systems; however, this is not an automatic process.
  • As soon as the export process completes (releasing of TRs), relevant files (Cofiles and Data files) are created in the common transport directory at OS level and the entry is made in the Import Buffer (OS View) / Import Queue (SAP App. View) of the QAS and PRD.
  • Now to perform the import, we need to access the import queue and for that we need to execute transaction code STMS -> Import Button OR select Overview -> Imports
  • It will show the list of systems in the current domain, description and number of requests available in Import Queue and the status.


Import Queue – the list of TRs available in the common directory that are ready to be imported into the target system. This is the SAP application view; at the OS level it is also known as the Import Buffer.


The Import status


Import Queue shows some standard ‘status icons‘ in the last column, here are the icons with their meanings, as defined by SAP:

If a request is not added automatically to the import queue/buffer even though the OS-level files are present, we can add such requests using the following method; however, we need to know the name of the intended TR:

Import History


We can also check the previous imports that happened in the system as follows:

Transport logs and return codes

  • After the transport has been performed, the system administrator must check whether it was performed properly or not; for that, SAP provides the following types of logs (SE01 -> Goto -> Transport Logs):
    • Action Log – displays the actions that have taken place: exports, test import, import and so forth.
    • Transport Logs – keep a record of the transport log files.
  • One of the most important pieces of information provided by the logs is the return code:
    • 0: The export was successful.
    • 4: A warning was issued, but all objects were transported successfully.
    • 8: A warning was issued and at least one object could not be transported successfully.
    • 12 or higher: A critical error has occurred, generally not caused by the objects in the request.
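The return codes above are easy to wrap in a small helper when scripting checks (an illustrative sketch, not an SAP-provided function):

```python
def transport_status(rc: int) -> str:
    # Interpret a transport return code as listed above.
    if rc == 0:
        return "success"
    if rc == 4:
        return "warning: all objects transported, but warnings were issued"
    if rc == 8:
        return "error: at least one object could not be transported"
    if rc >= 12:
        return "critical: error generally not caused by the request's objects"
    return "unknown return code"

assert transport_status(0) == "success"
assert transport_status(8).startswith("error")
assert transport_status(16).startswith("critical")
```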





Configuring the Transport Domain 


To configure and maintain the transport domain, you need the authorization S_CTS_CONFIG contained in the profile S_A.SYSTEM.

Process Flow

First, you must decide which SAP System you want to configure as the transport domain controller.

You can only carry out all the activities relevant to the entire transport domain, such as configuring transport routes or configuring RFC connections, in the domain controller. We therefore recommend configuring the transport domain controller in an SAP System with the following attributes:

  • High availability
  • High security precautions
  • Highest possible release

The transport domain controller should normally be configured in a production system or quality assurance system.


The resulting system load is low for the SAP System configured as the transport domain controller. Only if the TMS configuration is changed, or if an error occurs, does the load on the domain controller increase for a short period.

When you have decided which system is to function as the transport domain controller and have configured it accordingly, you can include all additional systems in the transport domain.

Configuring the Transport Domain Controller 


You have decided which system should be the Transport Domain Controller.


To configure a system as the transport domain controller (and thereby configure a new transport domain):

  1. Log on to client 000 in the SAP System that you want to configure as the transport domain controller.
  2. Enter Transaction STMS. The dialog box TMS: Configure Transport Domain appears.

(This dialog box only appears if you have not yet configured a transport domain.)

  3. Enter the name and a short description of the transport domain.

The name of the transport domain may not contain blank characters. You cannot change the name of the transport domain afterwards without reconfiguring the domain controller and thereby the entire transport domain.

  4. If your SAP System consists of multiple application servers, you can choose one server for the TMS.

  5. Save your entries. The following actions are performed automatically in your SAP System:
    • The user TMSADM is created.
    • The RFC destinations required for the TMS are generated.
    • The TMS configuration is stored in the transport directory.
    • The transport profile for the transport control program tp is generated.
    • The SAP System is configured as a single system.


The configuration of the transport domain is now complete for this SAP System. The initial screen of Transaction STMS shows that this SAP System is now functioning as the domain controller of the transport domain.





Transport Layers and Transport Routes 

All development projects developed in the same SAP System and transported on the same transport routes are grouped together to form a transport layer.

Before you start the first development project, you create a transport layer in the TMS transport route editor. This transport layer is assigned to the development system as its standard transport layer. Objects delivered by SAP belong to the transport layer “SAP”. Other transport layers are generally only needed when new development systems are included in the system group.

After you have set up the transport layer, you set up the transport routes. There are two types of transport routes: first you set up consolidation routes, and then you set up delivery routes:

    1. Consolidation routes

To make your changes transportable, set up a consolidation route for each transport layer. Specify your development system as the starting point (source) of these consolidation routes. Specify the quality assurance system as the transport target (in a two-system landscape, specify the production system as the transport target).

Any modified objects that have a consolidation route set up for their transport layer are included in transportable change requests. After the request has been released the objects can be imported into the consolidation system.

If you make changes to objects which have no consolidation route defined for their transport layer, then the changes are made automatically in local change requests (or in Customizing requests without a transport target). You cannot transport them into other SAP Systems.

You can set up one consolidation route only for each SAP System and transport layer.


When you define consolidation routes, note the additional functions available when you use Extended Transport Control.

    2. Delivery routes


After you have imported your development work into the quality assurance system, you then want to transport it into your production system. You may even want to transport it into several SAP Systems (for example, additional training systems). To do this, you have to set up delivery routes.

Delivery routes have a source system and a target system.

When you set up a delivery route, you are making sure that all change requests that are imported into the route’s source system are automatically flagged for import into the route’s target system.

You can set up several delivery routes with the same source system and different target systems (parallel forwarding). You can also set up delivery routes in sequence (multilevel forwarding).

CTS transport control makes sure that all requests from the development system are flagged for import into all other SAP Systems in the same order in which they were exported. This is important, since different requests can contain the same Repository object or the same Customizing setting at different development levels, and you must avoid overwriting a more recent version with an older version.

Multilevel Delivery

Here you can activate multiple delivery routes in sequence. You can choose any SAP Systems in the system group as the source systems of the delivery routes; they do not have to be consolidation systems. This allows you to implement complex chains of transport routes.


This graphic is explained in the accompanying text


Multilevel delivery is not required in a two- or three-system group. In more complex system landscapes, particularly in layered development projects that have each other as sources, multilevel delivery may prove to be a suitable solution:


This graphic is explained in the accompanying text


If there are SAP Systems in the system group with releases prior to 4.0, you can only use multilevel delivery under particular conditions. The Transport Management System checks these conditions when you configure the transport routes in a mixed system group.





Oracle Architecture

These notes introduce the Oracle server architecture.  The architecture includes physical components, memory components, processes, and logical structures.



Primary Architecture Components




The figure shown above details the Oracle architecture.


Oracle server:  An Oracle server includes an Oracle Instance and an Oracle database.

  • An Oracle database includes several different types of files: data files, control files, redo log files and archived redo log files.  The Oracle server also accesses parameter files and password files.
  • This set of files has several purposes.

o   One is to enable system users to process SQL statements.

o   Another is to improve system performance.

o   Still another is to ensure the database can be recovered if there is a software/hardware failure.

  • The database server must manage large amounts of data in a multi-user environment.
  • The server must manage concurrent access to the same data.
  • The server must deliver high performance. This generally means fast response times.


Oracle instance:  An Oracle Instance consists of two different sets of components:

  • The first component set is the set of background processes (PMON, SMON, RECO, DBW0, LGWR, CKPT, D000 and others).

o   These will be covered later in detail – each background process is a computer program.

o   These processes perform input/output and monitor other Oracle processes to provide good performance and database reliability.

  • The second component set includes the memory structures that comprise the Oracle instance.

o   When an instance starts up, a memory structure called the System Global Area (SGA) is allocated.

o   At this point the background processes also start.

  • An Oracle Instance provides access to one and only one Oracle database.


Oracle database: An Oracle database consists of files.

  • Sometimes these are referred to as operating system files, but they are actually database files that store the database information that a firm or organization needs in order to operate.
  • The redo log files are used to recover the database in the event of application program failures, instance failures and other minor failures.
  • The archived redo log files are used to recover the database if a disk fails.
  • Other files not shown in the figure include:

o   The required parameter file that is used to specify parameters for configuring an Oracle instance when it starts up.

o   The optional password file authenticates special users of the database – these are termed privileged users and include database administrators.

o   Alert and Trace Log Files – these files store information about errors and actions taken that affect the configuration of the database.


User and server processes:  The processes shown in the figure are called user and server processes.  These processes are used to manage the execution of SQL statements.

  • A Shared Server Process can share memory and variable processing for multiple user processes.
  • A Dedicated Server Process manages memory and variables for a single user process.



This figure from the Oracle Database Administration Guide provides another way of viewing the SGA.




Connecting to an Oracle Instance – Creating a Session




System users can connect to an Oracle database through SQLPlus or through an application program like the Internet Developer Suite (the program becomes the system user).  This connection enables users to execute SQL statements.


The act of connecting creates a communication pathway between a user process and an Oracle Server.  As is shown in the figure above, the User Process communicates with the Oracle Server through a Server Process.  The User Process executes on the client computer.  The Server Process executes on the server computer, and actually executes SQL statements submitted by the system user.


The figure shows a one-to-one correspondence between the User and Server Processes.  This is called a Dedicated Server connection.  An alternative configuration is to use a Shared Server where more than one User Process shares a Server Process.


Sessions:  When a user connects to an Oracle server, this is termed a session.  The User Global Area is session memory and these memory structures are described later in this document.  The session starts when the Oracle server validates the user for connection.  The session ends when the user logs out (disconnects) or if the connection terminates abnormally (network failure or client computer failure).


A user can typically have more than one concurrent session, e.g., the user may connect using SQLPlus and also connect using Internet Developer Suite tools at the same time.  The limit of concurrent session connections is controlled by the DBA.


If a system user attempts to connect and the Oracle Server is not running, the system user receives the Oracle Not Available error message.



Physical Structure – Database Files


As was noted above, an Oracle database consists of physical files.  The database itself has:

  • Datafiles – these contain the organization’s actual data.
  • Redo log files – these contain a chronological record of changes made to the database, and enable recovery when failures occur.
  • Control files – these are used to synchronize all database activities and are covered in more detail in a later module.



Other key files as noted above include:

  • Parameter file – there are two types of parameter files.

o   The init.ora file (also called the PFILE) is a static parameter file.  It contains parameters that specify how the database instance is to start up.  For example, some parameters will specify how to allocate memory to the various parts of the system global area.

o   The spfile.ora is a dynamic parameter file.  It also stores parameters that specify how to start up a database; however, its parameters can be modified while the database is running.

  • Password file – specifies which *special* users are authenticated to startup/shut down an Oracle Instance.
  • Archived redo log files – these are copies of the redo log files and are necessary for recovery in an online, transaction-processing environment in the event of a disk failure.



Memory Management and Memory Structures


Oracle Database Memory Management


Memory management – focus is to maintain optimal sizes for memory structures.

  • Memory is managed based on memory-related initialization parameters.
  • These values are stored in the init.ora file for each database.


Three basic options for memory management are as follows:

  • Automatic memory management:

o   DBA specifies the target size for instance memory.

o   The database instance automatically tunes to the target memory size.

o   Database redistributes memory as needed between the SGA and the instance PGA.


  • Automatic shared memory management:

o   This management mode is partially automated.

o   DBA specifies the target size for the SGA.

o   DBA can optionally set an aggregate target size for the PGA or manage PGA work areas individually.


  • Manual memory management:

o   Instead of setting the total memory size, the DBA sets many initialization parameters to manage components of the SGA and instance PGA individually.


If you create a database with Database Configuration Assistant (DBCA) and choose the basic installation option, then automatic memory management is the default.


The memory structures include three areas of memory:

  • System Global Area (SGA) – this is allocated when an Oracle Instance starts up.
  • Program Global Area (PGA) – this is allocated when a Server Process starts up.
  • User Global Area (UGA) – this is allocated when a user connects to create a session.


System Global Area


The SGA is a read/write memory area that stores information shared by all database processes and by all users of the database (sometimes it is called the Shared Global Area).

o   This information includes both organizational data and control information used by the Oracle Server.

o   The SGA is allocated in memory and virtual memory.

o   The size of the SGA can be established by a DBA by assigning a value to the parameter SGA_MAX_SIZE in the parameter file—this is an optional parameter.


The SGA is allocated when an Oracle instance (database) is started up based on values specified in the initialization parameter file (either PFILE or SPFILE).
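Parameters such as SGA_MAX_SIZE are read from this plain-text file at startup.  As an illustration only (the file contents below are hypothetical and the parser is a sketch, not Oracle's actual PFILE reader), size values written as 1536M or 1G can be decoded like this:

```python
# Sketch of reading init.ora-style size parameters.  The parameter
# names are real Oracle parameters; the file contents and the parsing
# logic are illustrative, not Oracle's actual PFILE reader.

def parse_size(value: str) -> int:
    """Convert '16M', '1G', or a plain byte count like '8192' into bytes."""
    units = {"K": 1024, "M": 1024**2, "G": 1024**3}
    value = value.strip().upper()
    if value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)

def parse_pfile(text: str) -> dict:
    """Parse 'name=value' lines, skipping comments and blank lines."""
    params = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()   # '#' starts a comment
        if "=" in line:
            name, _, value = line.partition("=")
            params[name.strip().lower()] = value.strip()
    return params

pfile = """
# hypothetical initDBORCL.ora fragment
sga_max_size=1536M
sga_target=1536M
log_buffer=14254080
"""
params = parse_pfile(pfile)
print(parse_size(params["sga_max_size"]))  # 1610612736 bytes
```

Note that 1536M works out to exactly 1,610,612,736 bytes, the Total System Global Area figure shown in the SHOW SGA output later in these notes.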


The SGA has the following mandatory memory structures:

  • Database Buffer Cache
  • Redo Log Buffer
  • Java Pool
  • Streams Pool
  • Shared Pool – includes two components:

o   Library Cache

o   Data Dictionary Cache

  • Other structures (for example, lock and latch management, statistical data)


Additional optional memory structures in the SGA include:

  • Large Pool


The SHOW SGA SQL command will show you the SGA memory allocations.

  • This is a recent clip of the SGA for the DBORCL database at SIUE.
  • In order to execute SHOW SGA you must be connected with the special privilege SYSDBA (which is only available to user accounts that are members of the DBA Linux group).


SQL> connect / as sysdba


SQL> show sga


Total System Global Area 1610612736 bytes

Fixed Size                  2084296 bytes

Variable Size            1006633528 bytes

Database Buffers          587202560 bytes

Redo Buffers               14692352 bytes
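As a quick sanity check of the SHOW SGA arithmetic, the four component lines always sum to the reported total:

```python
# The SHOW SGA components above sum to the reported total.
fixed_size       = 2_084_296
variable_size    = 1_006_633_528
database_buffers = 587_202_560
redo_buffers     = 14_692_352

total = fixed_size + variable_size + database_buffers + redo_buffers
print(total)  # 1610612736 -- matches "Total System Global Area"
```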



Early versions of Oracle used a Static SGA.  This meant that if modifications to memory management were required, the database had to be shut down, modifications were made to the init.ora parameter file, and then the database had to be restarted.


Oracle 11g uses a Dynamic SGA.   Memory configurations for the system global area can be made without shutting down the database instance.  The DBA can resize the Database Buffer Cache and Shared Pool dynamically.


Several initialization parameters are set that affect the amount of random access memory dedicated to the SGA of an Oracle Instance.  These are:


  • SGA_MAX_SIZE: This optional parameter sets a limit on the amount of virtual memory allocated to the SGA – a typical setting might be 1 GB.  However, if the value for SGA_MAX_SIZE in the initialization parameter file or server parameter file is less than the sum of the memory allocated for all components (either explicitly in the parameter file or by default) at the time the instance is initialized, then the database ignores the setting for SGA_MAX_SIZE.  For optimal performance, the entire SGA should fit in real memory to eliminate paging to/from disk by the operating system.
  • DB_CACHE_SIZE: This optional parameter is used to tune the amount of memory allocated to the Database Buffer Cache in standard database blocks.  Block sizes vary among operating systems.  The DBORCL database uses 8 KB blocks.  The total size of the cache defaults to 48 MB on LINUX/UNIX and 52 MB on Windows operating systems.
  • LOG_BUFFER: This optional parameter specifies the number of bytes allocated for the Redo Log Buffer.
  • SHARED_POOL_SIZE: This optional parameter specifies the number of bytes of memory allocated to shared SQL and PL/SQL.  The default is 16 MB.  If the operating system is based on a 64 bit configuration, then the default size is 64 MB.
  • LARGE_POOL_SIZE: This is an optional memory object – the size of the Large Pool defaults to zero.  If the init.ora parameter PARALLEL_AUTOMATIC_TUNING is set to TRUE, then the default size is automatically calculated.
  • JAVA_POOL_SIZE: This is another optional memory object.  The default is 24 MB of memory.


The combined sizes specified by the component parameters DB_CACHE_SIZE, LOG_BUFFER, SHARED_POOL_SIZE, LARGE_POOL_SIZE, and JAVA_POOL_SIZE cannot exceed the parameter SGA_MAX_SIZE.


Memory is allocated to the SGA as contiguous virtual memory in units termed granules.  Granule size depends on the estimated total size of the SGA, which as was noted above, depends on the SGA_MAX_SIZE parameter.  Granules are sized as follows:

  • If the SGA is less than 1 GB in total, each granule is 4 MB.
  • If the SGA is greater than 1 GB in total, each granule is 16 MB.
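The sizing rule and the resulting granule count can be sketched in a few lines (the function names are illustrative, not Oracle APIs):

```python
# Sketch of the granule sizing rule described above:
# SGA < 1 GB  -> 4 MB granules; otherwise -> 16 MB granules.
MB = 1024**2
GB = 1024**3

def granule_size(estimated_sga_bytes: int) -> int:
    return 4 * MB if estimated_sga_bytes < GB else 16 * MB

def granules_needed(sga_bytes: int) -> int:
    gran = granule_size(sga_bytes)
    return -(-sga_bytes // gran)   # ceiling division: a partial granule rounds up

# The 1,610,612,736-byte SGA shown earlier uses 16 MB granules:
print(granule_size(1_610_612_736) // MB)   # 16
print(granules_needed(1_610_612_736))      # 96
```

For the DBORCL SGA shown earlier this yields 16 MB granules (matching the Granule Size row of 16777216 bytes reported by V$SGAINFO) and 96 granules in total.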


Granules are assigned to the Database Buffer Cache, Shared Pool, Java Pool, and other memory structures, and these memory components can dynamically grow and shrink.  Using contiguous memory improves system performance.  The actual number of granules assigned to one of these memory components can be determined by querying the database view named V$BUFFER_POOL.


Granules are allocated when the Oracle server starts a database instance in order to provide memory addressing space to meet the SGA_MAX_SIZE parameter.  The minimum is 3 granules: one each for the fixed SGA, Database Buffer Cache, and Shared Pool.  In practice, you’ll find the SGA is allocated much more memory than this.  The SELECT statement shown below reports a current_size of 560 for the default buffer pool (the CURRENT_SIZE column of V$BUFFER_POOL is expressed in megabytes).


SELECT name, block_size, current_size, prev_size, prev_buffers

FROM v$buffer_pool;



NAME                 BLOCK_SIZE CURRENT_SIZE  PREV_SIZE PREV_BUFFERS
-------------------- ---------- ------------ ---------- ------------
DEFAULT                    8192          560        576        71244


For additional information on dynamic SGA sizing, enroll in Oracle’s Oracle 11g Database Performance Tuning course.



Program Global Area (PGA)


A PGA is:

  • a nonshared memory region that contains data and control information exclusively for use by an Oracle process.
  • A PGA is created by Oracle Database when an Oracle process is started.
  • One PGA exists for each Server Process and each Background Process.  It stores data and control information for a single Server Process or a single Background Process.
  • It is allocated when a process is created, and the memory is scavenged by the operating system when the process terminates.  This is NOT a shared part of memory – one PGA per process.
  • The collection of individual PGAs is termed the total instance PGA, or simply the instance PGA.
  • Database initialization parameters set the size of the instance PGA, not individual PGAs.


The Program Global Area is also termed the Process Global Area (PGA); it is memory that is allocated outside of the SGA.





The content of the PGA varies, but as shown in the figure above, generally includes the following:


  • Private SQL Area: Stores information for a parsed SQL statement – stores bind variable values and runtime memory allocations.  A user session issuing SQL statements has a Private SQL Area that may be associated with a Shared SQL Area if the same SQL statement is being executed by more than one system user.  This often happens in OLTP environments where many users are executing and using the same application program.

o   Dedicated Server environment – the Private SQL Area is located in the Program Global Area.

o   Shared Server environment – the Private SQL Area is located in the System Global Area.


  • Session Memory: Memory that holds session variables and other session information.


  • SQL Work Areas: Memory allocated for sort, hash-join, bitmap merge, and bitmap create types of operations.

o   Oracle 9i and later versions enable automatic sizing of the SQL Work Areas by setting the WORKAREA_SIZE_POLICY = AUTO parameter (this is the default!) and PGA_AGGREGATE_TARGET = n (where n is some amount of memory established by the DBA).  However, the DBA can let the Oracle DBMS determine the appropriate amount of memory.



User Global Area

The User Global Area is session memory.



A session that loads a PL/SQL package into memory has the package state stored in the UGA.  The package state is the set of values stored in all the package variables at a specific time.  The state changes as program code modifies the variables.  By default, package variables are unique to, and persist for the life of, the session.

The OLAP page pool is also stored in the UGA.  This pool manages OLAP data pages, which are equivalent to data blocks.  The page pool is allocated at the start of an OLAP session and released at the end of the session.  An OLAP session opens automatically whenever a user queries a dimensional object such as a cube.

Note:  Oracle OLAP is a multidimensional analytic engine embedded in Oracle Database 11g.  Oracle OLAP cubes deliver sophisticated calculations using simple SQL queries – producing results with speed of thought response times.

The UGA must be available to a database session for the life of the session.  For this reason, the UGA cannot be stored in the PGA when using a shared server connection because the PGA is specific to a single process.  Therefore, the UGA is stored in the SGA when using shared server connections, enabling any shared server process access to it.  When using a dedicated server connection, the UGA is stored in the PGA.
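The placement rule above reduces to a single decision, which can be sketched trivially (illustrative only, not an Oracle API):

```python
# Where the UGA lives, per the rule above: shared server connections keep
# it in the SGA (any server process may pick up the session); dedicated
# server connections keep it in that one process's PGA.

def uga_location(connection_mode: str) -> str:
    if connection_mode == "shared":
        return "SGA"
    if connection_mode == "dedicated":
        return "PGA"
    raise ValueError(f"unknown connection mode: {connection_mode}")

print(uga_location("shared"))     # SGA
print(uga_location("dedicated"))  # PGA
```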


Automatic Shared Memory Management


Prior to Oracle 10g, a DBA had to manually specify SGA component sizes through initialization parameters such as SHARED_POOL_SIZE, DB_CACHE_SIZE, JAVA_POOL_SIZE, and LARGE_POOL_SIZE.


Automatic Shared Memory Management enables a DBA to specify the total SGA memory available through the SGA_TARGET initialization parameter.  The Oracle Database automatically distributes this memory among various subcomponents to ensure most effective memory utilization.


The DBORCL database SGA_TARGET is set in the initDBORCL.ora file:




With automatic SGA memory management, the different SGA components are flexibly sized to adapt to the SGA available.


Setting a single parameter simplifies the administration task – the DBA only specifies the amount of SGA memory available to an instance – the DBA can forget about the sizes of individual components. No out of memory errors are generated unless the system has actually run out of memory.  No manual tuning effort is needed.


The SGA_TARGET initialization parameter reflects the total size of the SGA and includes memory for the following components:

  • Fixed SGA and other internal allocations needed by the Oracle Database instance
  • The log buffer
  • The shared pool
  • The Java pool
  • The buffer cache
  • The keep and recycle buffer caches (if specified)
  • Nonstandard block size buffer caches (if specified)
  • The Streams Pool


If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, then the SGA_MAX_SIZE value is bumped up to accommodate SGA_TARGET.

When you set a value for SGA_TARGET, Oracle Database 11g automatically sizes the most commonly configured components, including:

  • The shared pool (for SQL and PL/SQL execution)
  • The Java pool (for Java execution state)
  • The large pool (for large allocations such as RMAN backup buffers)
  • The buffer cache


There are a few SGA components whose sizes are not automatically adjusted. The DBA must specify the sizes of these components explicitly, if they are needed by an application. Such components are:

  • Keep/Recycle buffer caches (controlled by DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE)
  • Additional buffer caches for non-standard block sizes (controlled by DB_nK_CACHE_SIZE, n = {2, 4, 8, 16, 32})
  • Streams Pool (controlled by the new parameter STREAMS_POOL_SIZE)


The granule size that is currently being used for the SGA for each component can be viewed in the view V$SGAINFO. The size of each component and the time and type of the last resize operation performed on each component can be viewed in the view V$SGA_DYNAMIC_COMPONENTS.


SQL> select * from v$sgainfo;



NAME                                  BYTES RES

-------------------------------- ---------- ---

Fixed SGA Size                      2084296 No

Redo Buffers                       14692352 No

Buffer Cache Size                 587202560 Yes

Shared Pool Size                  956301312 Yes

Large Pool Size                    16777216 Yes

Java Pool Size                     33554432 Yes

Streams Pool Size                         0 Yes

Granule Size                       16777216 No

Maximum SGA Size                 1610612736 No

Startup overhead in Shared Pool    67108864 No

Free SGA Memory Available                 0


11 rows selected.


Shared Pool




The Shared Pool is a memory structure that is shared by all system users.

  • It caches various types of program data. For example, the shared pool stores parsed SQL, PL/SQL code, system parameters, and data dictionary information.
  • The shared pool is involved in almost every operation that occurs in the database. For example, if a user executes a SQL statement, then Oracle Database accesses the shared pool.
  • It consists of both fixed and variable structures.
  • The variable component grows and shrinks depending on the demands placed on memory size by system users and application programs.


Memory can be allocated to the Shared Pool by the parameter SHARED_POOL_SIZE in the parameter file.  The default value of this parameter is 8MB on 32-bit platforms and 64MB on 64-bit platforms. Increasing the value of this parameter increases the amount of memory reserved for the shared pool.


You can alter the size of the shared pool dynamically with the ALTER SYSTEM SET command, for example: ALTER SYSTEM SET SHARED_POOL_SIZE = 64M;  You must keep in mind that the total memory allocated to the SGA is set by the SGA_TARGET parameter (and may also be limited by SGA_MAX_SIZE if it is set), and since the Shared Pool is part of the SGA, you cannot exceed the maximum size of the SGA.  It is recommended to let Oracle optimize the Shared Pool size.


The Shared Pool stores the most recently executed SQL statements and used data definitions.  This is because some system users and application programs will tend to execute the same SQL statements often.  Saving this information in memory can improve system performance.


The Shared Pool includes several cache areas described below.


Library Cache


Memory is allocated to the Library Cache whenever an SQL statement is parsed or a program unit is called.  This enables storage of the most recently used SQL and PL/SQL statements.


If the Library Cache is too small, the Library Cache must purge statement definitions in order to have space to load new SQL and PL/SQL statements.  Actual management of this memory structure is through a Least-Recently-Used (LRU) algorithm.  This means that the SQL and PL/SQL statements that are oldest and least recently used are purged when more storage space is needed.


The Library Cache is composed of two memory subcomponents:

  • Shared SQL: This stores/shares the execution plan and parse tree for SQL statements, as well as PL/SQL statements such as functions, packages, and triggers.  If a system user executes an identical statement, then the statement does not have to be parsed again in order to execute the statement.
  • Private SQL Area: With a shared server, each session issuing a SQL statement has a private SQL area in its PGA.

o   Each user that submits the same statement has a private SQL area pointing to the same shared SQL area.

o   Many private SQL areas in separate PGAs can be associated with the same shared SQL area.

o   This figure depicts two different client processes issuing the same SQL statement – the parsed solution is already in the Shared SQL Area.




Data Dictionary Cache


The Data Dictionary Cache is a memory structure that caches data dictionary information that has been recently used.

  • This cache is necessary because the data dictionary is accessed so often.
  • Information accessed includes user account information, datafile names, table descriptions, user privileges, and other information.


The database server manages the size of the Data Dictionary Cache internally and the size depends on the size of the Shared Pool in which the Data Dictionary Cache resides.  If the size is too small, then the data dictionary tables that reside on disk must be queried often for information and this will slow down performance.


Server Result Cache


The Server Result Cache holds result sets and not data blocks. The server result cache contains the SQL query result cache and PL/SQL function result cache, which share the same infrastructure.


SQL Query Result Cache


This cache stores the results of queries and query fragments.

  • Using the cache results for future queries tends to improve performance.
  • For example, suppose an application runs the same SELECT statement repeatedly. If the results are cached, then the database returns them immediately.
  • In this way, the database avoids the expensive operation of rereading blocks and recomputing results.


PL/SQL Function Result Cache


The PL/SQL Function Result Cache stores function result sets.

  • Without caching, 1000 calls of a function at 1 second per call would take 1000 seconds.
  • With caching, 1000 function calls with the same inputs could take 1 second total.
  • Good candidates for result caching are frequently invoked functions that depend on relatively static data.
  • PL/SQL function code can specify that results be cached.



Buffer Caches


A number of buffer caches are maintained in memory in order to improve system response time.


Database Buffer Cache


The Database Buffer Cache is a fairly large memory object that stores the actual data blocks that are retrieved from datafiles by system queries and other data manipulation language commands.


The purpose is to optimize physical input/output of data.


When Database Smart Flash Cache (flash cache) is enabled, part of the buffer cache can reside in the flash cache.

  • This buffer cache extension is stored on a flash disk device, which is a solid state storage device that uses flash memory.
  • The database can improve performance by caching buffers in flash memory instead of reading from magnetic disk.
  • Database Smart Flash Cache is available only in Solaris and Oracle Enterprise Linux.


A query causes a Server Process to look for data.

  • The first look is in the Database Buffer Cache to determine if the requested information happens to already be located in memory – thus the information would not need to be retrieved from disk and this would speed up performance.
  • If the information is not in the Database Buffer Cache, the Server Process retrieves the information from disk and stores it to the cache.
  • Keep in mind that information read from disk is read a block at a time, NOT a row at a time, because a database block is the smallest addressable storage space on disk.


Database blocks are kept in the Database Buffer Cache according to a Least Recently Used (LRU) algorithm: blocks that have not been used recently are aged out of memory to provide space for the insertion of newly needed database blocks.


There are three buffer states:

  • Unused – a buffer is available for use – it has never been used or is currently unused.
  • Clean – a buffer that was used earlier – the data has been written to disk.
  • Dirty – a buffer that has modified data that has not been written to disk.


Each buffer has one of two access modes:

  • Pinned – a buffer is pinned so it does not age out of memory.
  • Free (unpinned).


The buffers in the cache are organized in two lists:

  • the write list and,
  • the least recently used (LRU) list.


The write list (also called a write queue) holds dirty buffers – these are buffers holding data that has been modified but whose blocks have not yet been written back to disk.


The LRU list holds free buffers, clean buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list.  Free buffers do not contain any useful data and are available for use.  Pinned buffers are currently being accessed.


When an Oracle process accesses a buffer, the process moves the buffer to the most recently used (MRU) end of the LRU list – this causes dirty buffers to age toward the LRU end of the LRU list.


When an Oracle user process needs a data row, it searches for the data in the database buffer cache because memory can be searched more quickly than hard disk can be accessed.  If the data row is already in the cache (a cache hit), the process reads the data from memory; otherwise a cache miss occurs and data must be read from hard disk into the database buffer cache.


Before reading a data block into the cache, the process must first find a free buffer. The process searches the LRU list, starting at the LRU end of the list.  The search continues until a free buffer is found or until the search reaches the threshold limit of buffers.


Each time a user process finds a dirty buffer as it searches the LRU, that buffer is moved to the write list and the search for a free buffer continues.


When a user process finds a free buffer, it reads the data block from disk into the buffer and moves the buffer to the MRU end of the LRU list.


If an Oracle user process searches the threshold limit of buffers without finding a free buffer, the process stops searching the LRU list and signals the DBWn background process to write some of the dirty buffers to disk.  This frees up some buffers.
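The search procedure described in the last few paragraphs can be sketched as a simplified model (real buffer management also involves latches, touch counts, and multiple lists; the names below are illustrative, not Oracle code):

```python
# Simplified model of the free-buffer search described above: scan from
# the LRU end, skip pinned buffers, move dirty buffers to the write list,
# stop at a free or clean buffer, and signal DBWn if the scan threshold
# is reached without finding one.  Illustrative only.

class Buffer:
    def __init__(self, name, state, pinned=False):
        self.name = name
        self.state = state     # 'free', 'clean', or 'dirty'
        self.pinned = pinned   # pinned buffers are in active use

def find_free_buffer(lru_list, write_list, threshold):
    """Return a reusable buffer, or None after signaling DBWn."""
    scanned = 0
    i = 0
    while i < len(lru_list) and scanned < threshold:
        buf = lru_list[i]
        scanned += 1
        if buf.pinned:
            i += 1                              # skip buffers in active use
            continue
        if buf.state == "dirty":
            write_list.append(lru_list.pop(i))  # queue for DBWn, keep scanning
            continue
        return lru_list.pop(i)                  # free or clean: reuse it
    print("signal DBWn: write dirty buffers to disk")
    return None

lru = [Buffer("b1", "dirty"), Buffer("b2", "clean", pinned=True),
       Buffer("b3", "free")]
writes = []
found = find_free_buffer(lru, writes, threshold=10)
print(found.name)                   # b3
print([b.name for b in writes])     # ['b1']
```

With a threshold too small to reach a free buffer, the function gives up and signals DBWn instead, mirroring the behavior described above.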


Database Buffer Cache Block Size


The block size for a database is set when a database is created and is determined by the init.ora parameter file parameter named DB_BLOCK_SIZE.

  • Typical block sizes are 2KB, 4KB, 8KB, 16KB, and 32KB.
  • The size of blocks in the Database Buffer Cache matches the block size for the database.
  • The DBORCL database uses an 8KB block size.
  • This figure shows that the use of non-standard block sizes results in multiple database buffer cache memory allocations.




Because tablespaces that store Oracle tables can use different (non-standard) block sizes, there can be more than one Database Buffer Cache allocated to match block sizes in the cache with the block sizes in the non-standard tablespaces.


The size of the Database Buffer Caches can be controlled by the parameters DB_CACHE_SIZE and DB_nK_CACHE_SIZE to dynamically change the memory allocated to the caches without restarting the Oracle instance.


You can dynamically change the size of the Database Buffer Cache with the ALTER SYSTEM command, for example: ALTER SYSTEM SET DB_CACHE_SIZE = 96M;




You can have the Oracle Server gather statistics about the Database Buffer Cache to help you size it to achieve an optimal workload for the memory allocation.  This information is displayed in the V$DB_CACHE_ADVICE view.  In order for statistics to be gathered, you can dynamically alter the system by using the ALTER SYSTEM SET DB_CACHE_ADVICE (OFF, ON, READY) command.  However, gathering statistics on system performance always incurs some overhead that will slow down system performance.


SQL> ALTER SYSTEM SET db_cache_advice = ON;


System altered.


SQL> DESC V$DB_cache_advice;

 Name                                      Null?    Type

 ----------------------------------------- -------- -------------

 ID                                                 NUMBER

 NAME                                               VARCHAR2(20)

 BLOCK_SIZE                                         NUMBER

 ADVICE_STATUS                                      VARCHAR2(3)

 SIZE_FOR_ESTIMATE                                  NUMBER

 SIZE_FACTOR                                        NUMBER

 BUFFERS_FOR_ESTIMATE                               NUMBER

 ESTD_PHYSICAL_READ_FACTOR                          NUMBER

 ESTD_PHYSICAL_READS                                NUMBER

 ESTD_PHYSICAL_READ_TIME                            NUMBER

 ESTD_PCT_OF_DB_TIME_FOR_READS                      NUMBER

 ESTD_CLUSTER_READS                                 NUMBER

 ESTD_CLUSTER_READ_TIME                             NUMBER


SQL> SELECT name, block_size, advice_status FROM v$db_cache_advice;


NAME                 BLOCK_SIZE ADV

-------------------- ---------- ---

DEFAULT                    8192 ON

<more rows will display>

21 rows selected.


SQL> ALTER SYSTEM SET db_cache_advice = OFF;


System altered.



KEEP Buffer Pool


This pool retains blocks in memory (data from tables) that are likely to be reused throughout daily processing.  An example might be a table containing user names and passwords or a validation table of some type.


The DB_KEEP_CACHE_SIZE parameter sizes the KEEP Buffer Pool.


RECYCLE Buffer Pool


This pool is used to store table data that is unlikely to be reused throughout daily processing – thus the data blocks are quickly removed from memory when not needed.


The DB_RECYCLE_CACHE_SIZE parameter sizes the Recycle Buffer Pool.




Redo Log Buffer



The Redo Log Buffer memory object stores images of all changes made to database blocks.

  • Database blocks typically store several table rows of organizational data.  This means that if a single column value from one row in a block is changed, the block image is stored.  Changes include INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP.
  • LGWR writes redo sequentially to disk while DBWn performs scattered writes of data blocks to disk.

o   Scattered writes tend to be much slower than sequential writes.

o   Because LGWR enables users to avoid waiting for DBWn to complete its slow writes, the database delivers better performance.


The Redo Log Buffer is a circular buffer that is reused over and over.  As the buffer fills up, copies of the images are written to the Redo Log Files, which are covered in more detail in a later module.
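The fill-and-flush cycle can be sketched as a small model (a simplification: the real buffer is reused in place and LGWR writes on several triggers, not only when the buffer is full):

```python
# Sketch of the circular Redo Log Buffer: change records are appended in
# order; when the buffer fills, its contents are flushed (as LGWR would)
# to the redo log and the space is reused.  Illustrative only.

class RedoLogBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = []          # change records waiting to be written
        self.log_file = []         # stand-in for the on-disk redo log

    def record_change(self, change: str) -> None:
        if len(self.entries) == self.capacity:
            self.flush()           # LGWR writes sequentially, then reuse
        self.entries.append(change)

    def flush(self) -> None:
        self.log_file.extend(self.entries)
        self.entries.clear()

buf = RedoLogBuffer(capacity=2)
for change in ["INSERT r1", "UPDATE r1", "DELETE r2"]:
    buf.record_change(change)
print(buf.log_file)   # ['INSERT r1', 'UPDATE r1'] -- flushed when full
print(buf.entries)    # ['DELETE r2'] -- still buffered
```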



Large Pool


The Large Pool is an optional memory structure that primarily relieves the memory burden placed on the Shared Pool.  The Large Pool is used for the following tasks if it is allocated:

  • Allocating space for session memory requirements from the User Global Area where a Shared Server is in use.
  • Transactions that interact with more than one database, e.g., a distributed database scenario.
  • Backup and restore operations by the Recovery Manager (RMAN) process.

o   RMAN uses this only if the BACKUP_DISK_IO = n and BACKUP_TAPE_IO_SLAVE = TRUE parameters are set.

o   If the Large Pool is too small, memory allocation from the Large Pool will fail and memory will instead be allocated from the Shared Pool.

  • Parallel execution message buffers for parallel server operations.  The PARALLEL_AUTOMATIC_TUNING = TRUE parameter must be set.


The Large Pool size is set with the LARGE_POOL_SIZE parameter – this is not a dynamic parameter.  It does not use an LRU list to manage memory.



Java Pool


The Java Pool is an optional memory object, but is required if the database has Oracle Java installed and in use for Oracle JVM (Java Virtual Machine).

  • The size is set with the JAVA_POOL_SIZE parameter that defaults to 24MB.
  • The Java Pool is used for memory allocation to parse Java commands and to store data associated with Java commands.
  • Storing Java code and data in the Java Pool is analogous to SQL and PL/SQL code cached in the Shared Pool.



Streams Pool


This pool stores data and control structures to support the Oracle Streams feature of Oracle Enterprise Edition.

  • Oracle Streams manages sharing of data and events in a distributed environment.
  • It is sized with the parameter STREAMS_POOL_SIZE.
  • If STREAMS_POOL_SIZE is not set or is zero, the size of the pool grows dynamically.







You need to understand three different types of Processes:

  • User Process: Starts when a database user requests to connect to an Oracle Server.
  • Server Process: Establishes the Connection to an Oracle Instance when a User Process requests connection – makes the connection for the User Process.
  • Background Processes: These start when an Oracle Instance is started up.



Client Process


In order to use Oracle, you must connect to the database.  This must occur whether you’re using SQL*Plus, an Oracle tool such as Designer or Forms, or an application program.  The client process is also termed the user process in some Oracle documentation.



Connecting generates a User Process (a memory object) that issues programmatic calls through your user interface (SQL*Plus, Integrated Developer Suite, or an application program).  This creates a session and causes the generation of a Server Process that is either dedicated or shared.




Server Process



A Server Process is the go-between for a Client Process and the Oracle Instance.

  • Dedicated Server environment – there is a single Server Process to serve each Client Process.
  • Shared Server environment – a Server Process can serve several User Processes, although with some performance reduction.
  • Allocation of server process in a dedicated environment versus a shared environment is covered in further detail in the Oracle11g Database Performance Tuning course offered by Oracle Education.



Background Processes


As is shown here, there are mandatory, optional, and slave background processes that are started whenever an Oracle Instance starts up.  These background processes serve all system users.  We will cover the mandatory processes in detail.


Mandatory Background Processes

  • Process Monitor Process (PMON)
  • System Monitor Process (SMON)
  • Database Writer Process (DBWn)
  • Log Writer Process (LGWR)
  • Checkpoint Process (CKPT)
  • Manageability Monitor Processes (MMON and MMNL)
  • Recoverer Process (RECO)


Optional Processes

  • Archiver Process (ARCn)
  • Coordinator Job Queue (CJQ0)
  • Dispatcher (number “nnn”) (Dnnn)
  • Others


This query will display all background processes running to serve a database:
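The original statement is not reproduced here; a typical query of this kind reads the V$BGPROCESS view:

```sql
-- List the background processes currently running for this instance
-- (PADDR = '00' marks processes that are defined but not started)
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER  BY name;
```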










The Process Monitor (PMON) monitors other background processes.

  • It is a cleanup type of process that cleans up after failed processes.
  • Examples include the dropping of a user connection due to a network failure or the abnormal termination (ABEND) of a user application program.
  • It cleans up the database buffer cache and releases resources that were used by a failed user process.
  • It does the tasks shown in the figure below.






The System Monitor (SMON) does system-level cleanup duties.

  • It is responsible for instance recovery by applying entries in the online redo log files to the datafiles.
  • Other processes can call SMON when it is needed.
  • It also performs other activities as outlined in the figure shown below.



If an Oracle Instance fails, all information in memory not written to disk is lost.  SMON is responsible for recovering the instance when the database is started up again.  It does the following:

  • Rolls forward to recover data that was recorded in a Redo Log File, but that had not yet been recorded to a datafile by DBWn.  SMON reads the Redo Log Files and applies the changes to the data blocks.  This recovers all transactions that were committed because these were written to the Redo Log Files prior to system failure.
  • Opens the database to allow system users to logon.
  • Rolls back uncommitted transactions.


SMON also does limited space management.  It combines (coalesces) adjacent areas of free space in the database’s datafiles for tablespaces that are dictionary managed.


It also deallocates temporary segments to create free space in the datafiles.



DBWn (also called DBWR in earlier Oracle Versions)


The Database Writer writes modified blocks from the database buffer cache to the datafiles.


  • One database writer process (DBW0) is sufficient for most systems.
  • A DBA can configure up to 20 DBWn processes (DBW0 through DBW9 and DBWa through DBWj) in order to improve write performance for a system that modifies data heavily.
  • The initialization parameter DB_WRITER_PROCESSES specifies the number of DBWn processes.

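To verify the current setting, a DBA can query the parameter directly (a sketch; the value shown is instance-specific):

```sql
-- SQL*Plus shortcut to display the parameter
SHOW PARAMETER db_writer_processes

-- Equivalent query against the dynamic performance view
SELECT value
FROM   v$parameter
WHERE  name = 'db_writer_processes';
```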

The purpose of DBWn is to improve system performance by caching writes of database blocks from the Database Buffer Cache back to datafiles.

  • Blocks that have been modified and that need to be written back to disk are termed “dirty blocks.”
  • The DBWn also ensures that there are enough free buffers in the Database Buffer Cache to service Server Processes that may be reading data from datafiles into the Database Buffer Cache.
  • Performance improves because by delaying writing changed database blocks back to disk, a Server Process may find the data that is needed to meet a User Process request already residing in memory!


  • DBWn writes to datafiles when one of the events illustrated in the figure below occurs.







The Log Writer (LGWR) writes contents from the Redo Log Buffer to the Redo Log File that is in use.

  • These are sequential writes since the Redo Log Files record database modifications based on the actual time that the modification takes place.
  • LGWR actually writes before the DBWn writes and only confirms that a COMMIT operation has succeeded when the Redo Log Buffer contents are successfully written to disk.
  • LGWR can also call the DBWn to write contents of the Database Buffer Cache to disk.
  • The LGWR writes according to the events illustrated in the figure shown below.






The Checkpoint (CKPT) process writes information to update the database control files and headers of datafiles.

  • A checkpoint identifies a point in time with regard to the Redo Log Files where instance recovery is to begin should it be necessary.
  • It can tell DBWn to write blocks to disk.
  • A checkpoint is taken, at a minimum, once every three seconds.
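In addition to automatic checkpoints, a DBA can force one manually, for example before planned maintenance:

```sql
-- Force an immediate checkpoint (requires the ALTER SYSTEM privilege);
-- CKPT signals DBWn to flush dirty buffers and updates the file headers
ALTER SYSTEM CHECKPOINT;
```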



Think of a checkpoint record as a starting point for recovery.  DBWn will have completed writing all buffers from the Database Buffer Cache to disk prior to the checkpoint, thus those records will not require recovery.  This does the following:

  • Ensures modified data blocks in memory are regularly written to disk – CKPT can call the DBWn process in order to ensure this and does so when writing a checkpoint record.
  • Reduces Instance Recovery time by minimizing the amount of work needed for recovery since only Redo Log File entries processed since the last checkpoint require recovery.
  • Causes all committed data to be written to datafiles during database shutdown.



If a Redo Log File fills up and a switch is made to a new Redo Log File (this is covered in more detail in a later module), the CKPT process also writes checkpoint information into the headers of the datafiles.


Checkpoint information written to control files includes the system change number (the SCN is a number stored in the control file and in the headers of the database files that are used to ensure that all files in the system are synchronized), location of which Redo Log File is to be used for recovery, and other information.


CKPT does not write data blocks or redo blocks to disk – it calls DBWn and LGWR as necessary.



The Manageability Monitor Process (MMON) performs tasks related to the Automatic Workload Repository (AWR) – a repository of statistical data in the SYSAUX tablespace (see figure below).  For example, MMON writes when a metric violates its threshold value, takes snapshots, and captures statistics for recently modified SQL objects.



The Manageability Monitor Lite Process (MMNL) writes statistics from the Active Session History (ASH) buffer in the SGA to disk. MMNL writes to disk when the ASH buffer is full.


The information stored by these processes is used for performance tuning – we survey performance tuning in a later module.
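As an illustration of where this data lands, the AWR snapshots written by MMON and the ASH samples written by MMNL can be inspected with queries such as the following (the views are standard, though column availability varies by release):

```sql
-- Recent AWR snapshots taken by MMON (stored in the SYSAUX tablespace)
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id DESC;

-- A sample of what MMNL captures: active sessions over the last hour
SELECT sample_time, session_id, event
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 1/24;
```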




The Recoverer Process (RECO) is used to resolve failures of distributed transactions in a distributed database.

  • Consider a database that is distributed on two servers – one in St. Louis and one in Chicago.
  • Further, the database may be distributed on servers of two different operating systems, e.g. LINUX and Windows.
  • The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction.
  • When RECO reestablishes a connection between the databases, it automatically resolves all in-doubt transactions, removing from each database’s pending transaction table any rows that correspond to the resolved transactions.


Optional Background Processes


Optional Background Process Definition:

  • ARCn: Archiver – One or more archiver processes copy the online redo log files to archival storage when they are full or a log switch occurs.
  • CJQ0: Coordinator Job Queue – This is the coordinator of job queue processes for an instance. It monitors the JOB$ table (table of jobs in the job queue) and starts job queue processes (Jnnn) as needed to execute jobs. The Jnnn processes execute job requests created by the DBMS_JOBS package.
  • Dnnn: Dispatcher number “nnn”, for example, D000 would be the first dispatcher process – Dispatchers are optional background processes, present only when the shared server configuration is used. Shared server is discussed in your readings on the topic “Configuring Oracle for the Shared Server”.
  • FBDA: Flashback Data Archiver Process – This archives historical rows of tracked tables into Flashback Data Archives. When a transaction containing DML on a tracked table commits, this process stores the pre-image of the rows into the Flashback Data Archive. It also keeps metadata on the current rows. FBDA automatically manages the flashback data archive for space, organization, and retention.


Of these, you will most often use ARCn (archiver) when you automatically archive redo log file information (covered in a later module).





While the Archiver (ARCn) is an optional background process, we cover it in more detail because it is almost always used for production systems storing mission critical information.

  • The ARCn process must be used to recover from loss of a physical disk drive for systems that are “busy” with lots of transactions being completed.
  • It performs the tasks listed below.



When a Redo Log File fills up, Oracle switches to the next Redo Log File.

  • The DBA creates several of these and the details of creating them are covered in a later module.
  • If all Redo Log Files fill up, then Oracle switches back to the first one and uses them in a round-robin fashion by overwriting ones that have already been used.
  • Overwritten Redo Log Files have information that, once overwritten, is lost forever.



  • If the database is in what is termed ARCHIVELOG mode, then as the Redo Log Files fill up, they are individually written to Archived Redo Log Files.
  • LGWR does not overwrite a Redo Log File until archiving has completed.
  • Committed data is not lost forever and can be recovered in the event of a disk failure.
  • Only the contents of the SGA will be lost if an Instance fails.



  • In NOARCHIVELOG mode, the Redo Log Files are overwritten and not archived.
  • Recovery can only be made to the last full backup of the database files.
  • All committed transactions after the last full backup are lost, and you can see that this could cost the firm a lot of $$$.


When running in ARCHIVELOG mode, the DBA is responsible for ensuring that the Archived Redo Log Files do not consume all available disk space!  Usually after two complete backups are made, any Archived Redo Log Files for prior backups are deleted.
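To check and change the archiving mode – a sketch only; the full procedure is covered in the later module on redo logs:

```sql
-- Determine whether the database is running in ARCHIVELOG mode
SELECT log_mode FROM v$database;

-- SQL*Plus summary of archiver status and destinations (run as SYSDBA)
ARCHIVE LOG LIST

-- Enabling ARCHIVELOG mode requires a clean restart to the MOUNT state
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```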


Slave Processes


Slave processes are background processes that perform work on behalf of other processes.


Innn: I/O slave processes — simulate asynchronous I/O for systems and devices that do not support it. In asynchronous I/O, there is no timing requirement for transmission, enabling other processes to start before the transmission has finished.

  • For example, assume that an application writes 1000 blocks to a disk on an operating system that does not support asynchronous I/O.
  • Each write occurs sequentially and waits for a confirmation that the write was successful.
  • With asynchronous I/O, the application can write the blocks in bulk and perform other work while waiting for a response from the operating system that all blocks were written.


Parallel Query Slaves: In parallel execution or parallel processing, multiple processes work together simultaneously to run a single SQL statement.

  • By dividing the work among multiple processes, Oracle Database can run the statement more quickly.
  • For example, four processes handle four different quarters in a year instead of one process handling all four quarters by itself.
  • Parallel execution reduces response time for data-intensive operations on large databases such as data warehouses. Symmetric multiprocessing (SMP) and clustered systems gain the largest performance benefits from parallel execution because statement processing can be split up among multiple CPUs. Parallel execution can also benefit certain types of OLTP and hybrid systems.



Logical Structure


It is helpful to understand how an Oracle database is organized in terms of a logical structure that is used to organize physical objects.



Tablespace:  An Oracle database must always consist of at least two tablespaces (SYSTEM and SYSAUX), although a typical Oracle database will have multiple tablespaces.

  • A tablespace is a logical storage facility (a logical container) for storing objects such as tables, indexes, sequences, clusters, and other database objects.
  • Each tablespace has at least one physical datafile that actually stores the tablespace at the operating system level.  A large tablespace may have more than one datafile allocated for storing objects assigned to that tablespace.
  • A tablespace belongs to only one database.
  • Tablespaces can be brought online and taken offline for purposes of backup and management, except for the SYSTEM tablespace that must always be online.
  • Tablespaces can be in either read-only or read-write status.


Datafile:  Tablespaces are stored in datafiles which are physical disk objects.

  • A datafile can only store objects for a single tablespace, but a tablespace may have more than one datafile – this happens when a disk drive fills up and the tablespace must be expanded onto a new disk drive.
  • The DBA can change the size of a datafile to make it smaller or larger.  The file can also grow in size dynamically as the tablespace grows.
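The tablespace/datafile relationship above can be sketched in SQL (all names, paths, and sizes are illustrative assumptions):

```sql
-- Create a tablespace backed by one datafile
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 100M
  AUTOEXTEND ON NEXT 10M MAXSIZE 2G;

-- Expand the tablespace later by adding a second datafile on another disk
ALTER TABLESPACE app_data
  ADD DATAFILE '/u02/oradata/ORCL/app_data02.dbf' SIZE 100M;

-- Resize an existing datafile (smaller or larger)
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/app_data01.dbf' RESIZE 200M;
```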


Segment:  When logical storage objects are created within a tablespace, for example, an employee table, a segment is allocated to the object.

  • Obviously a tablespace typically has many segments.
  • A segment cannot span tablespaces but can span datafiles that belong to a single tablespace.


Extent:  Each object has one segment which is a physical collection of extents.

  • Extents are simply collections of contiguous disk storage blocks. A logical storage object such as a table or index always consists of at least one extent – ideally the initial extent allocated to an object will be large enough to store all data that is initially loaded.
  • As a table or index grows, additional extents are added to the segment.
  • A DBA can add extents to segments in order to tune performance of the system.
  • An extent cannot span a datafile.
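A sketch of inspecting and tuning extents for a hypothetical HR.EMPLOYEE table (the schema and table name are assumptions for illustration):

```sql
-- List the extents allocated to the table's segment
SELECT extent_id, file_id, block_id, blocks
FROM   dba_extents
WHERE  owner = 'HR' AND segment_name = 'EMPLOYEE'
ORDER  BY extent_id;

-- Manually allocate one more extent in anticipation of growth
ALTER TABLE hr.employee ALLOCATE EXTENT;
```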


Block:  The Oracle Server manages data at its smallest unit of storage, termed a block or data block.  Data are actually stored in blocks.


A physical block is the smallest addressable location on a disk drive for read/write operations.


An Oracle data block consists of one or more physical blocks (operating system blocks), so the data block, if larger than an operating system block, should be an even multiple of the operating system block size, e.g., if the Linux operating system block size is 2K or 4K, then the Oracle data block should be 2K, 4K, 8K, 16K, etc., in size.  This optimizes I/O.


The data block size is set at the time the database is created and cannot be changed.  It is set with the DB_BLOCK_SIZE parameter.  The maximum data block size depends on the operating system.
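The block size chosen at creation time can be confirmed afterwards, for example:

```sql
-- DB_BLOCK_SIZE is fixed at CREATE DATABASE time and cannot be changed
SELECT value AS block_size_bytes
FROM   v$parameter
WHERE  name = 'db_block_size';
```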


Thus, the Oracle database architecture includes both logical and physical structures as follows:

  • Physical: Control files; Redo Log Files; Datafiles; Operating System Blocks.
  • Logical: Tablespaces; Segments; Extents; Data Blocks.



SQL Statement Processing


SQL Statements are processed differently depending on whether the statement is a query, data manipulation language (DML) to update, insert, or delete a row, or data definition language (DDL) to write information to the data dictionary.



Processing a query:

  • Parse:

o   Search for identical statement in the Shared SQL Area.

o   Check syntax, object names, and privileges.

o   Lock objects used during parse.

o   Create and store execution plan.

  • Bind: Obtains values for variables.
  • Execute: Process statement.
  • Fetch: Return rows to user process.
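The parse/bind/execute/fetch phases can be observed with a bind variable in SQL*Plus (the HR.EMPLOYEES table and department value are assumptions used for illustration):

```sql
-- Bind phase: the value of :v_dept is supplied at execute time,
-- so the parsed cursor in the Shared SQL Area can be reused
VARIABLE v_dept NUMBER
EXEC :v_dept := 10

-- Parse (or reuse), bind, execute, then fetch rows to the user process
SELECT last_name
FROM   hr.employees
WHERE  department_id = :v_dept;

-- Re-running with a new value repeats only bind, execute, and fetch
```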


Processing a DML statement:

  • Parse: Same as the parse phase used for processing a query.
  • Bind: Same as the bind phase used for processing a query.
  • Execute:

o   If the data and undo blocks are not already in the Database Buffer Cache, the server process reads them from the datafiles into the Database Buffer Cache.

o   The server process places locks on the rows that are to be modified. The undo block is used to store the before image of the data, so that the DML statements can be rolled back if necessary.

o   The data blocks record the new values of the data.

o   The server process records the before image to the undo block and updates the data block.  Both of these changes are made in the Database Buffer Cache.  Any changed blocks in the Database Buffer Cache are marked as dirty buffers.  That is, buffers that are not the same as the corresponding blocks on the disk.

o   The processing of a DELETE or INSERT command uses similar steps.  The before image for a DELETE contains the column values in the deleted row, and the before image of an INSERT contains the row location information.


Processing a DDL statement:

  • The execution of DDL (Data Definition Language) statements differs from the execution of DML (Data Manipulation Language) statements and queries, because the success of a DDL statement requires write access to the data dictionary.
  • For these statements, parsing actually includes parsing, data dictionary lookup, and execution.  Transaction management, session management, and system management SQL statements are processed using the parse and execute stages.  To re-execute them, simply perform another execute.





SAP Workload Analysis

Logging in:
While logging in, the presentation server connects with the dispatcher, which allocates work processes. When a user tries to run a transaction, the user’s request is sent from the presentation server to the dispatcher and is put into the local wait queue. When the dispatcher recognizes that a work process is free, it allocates that process to the user’s request taken from the wait queue.
Wait time (in milliseconds)
This is the time when the user’s request sits in the dispatcher queue for allocation of work process. It starts when the user’s request is entered in the dispatcher queue and ends when a process is allocated for the request waiting in the queue.
User context data
When a user is dispatched to a work process, details such as the user’s logon attributes, authorizations, and other relevant information are transferred from the roll memory, extended memory, or the roll file into the work process. This transfer of user context data into the work process is called ‘Roll in’.
If data from the database is needed to support transaction processing, then the request for data is sent to the database interface, which in turn sends a request through the network to retrieve information from the database.
When a request is received, the database searches its shared memory buffers. If the data is found, it is sent back to the work process. If the data is not found, it is loaded from disk into the shared memory buffers. After being located, the data is taken from the shared memory buffers and sent back across the network to the requesting database interface.
When transaction processing is completed, the dispatcher is notified of its completion. The work process is then no longer required; the user context data is rolled out of the work process.
CPU time
CPU time is the amount of time during which a particular work process has active control of the central processing unit.
Response time in milliseconds
Starts when a user request enters the dispatcher queue; ends when the next screen is returned to the user. The response time does not include the time to transfer the screen to the front end.
Roll in time in milliseconds
The amount of time needed to roll user context information into the work process.
Load time in milliseconds
The time needed to load from the database and generate objects like ABAP Source code, CUA and screen information.
Processing time
This is equivalent to response time minus the sum of wait time, database request time, load time, roll time, and enqueue time.
Database request time
Starts when a database request is put through the database interface; ends when the database interface has delivered the result.
General performance indicators (factors indicating good performance):
Wait time < 10% of response time.
Average roll in time < 20 milliseconds.
Average roll wait time < 200 ms.
Average load (and generation) time < 10% of response time (< 50 ms)
Average database request time < 40% of (response time – wait time)
Average CPU time < 40% of (response time – wait time)
Average CPU time is not much less than processing time.
Average response time – depends on customer requirements; there is no general rule.
Problems in the above factors and reasons for their problems
Large roll wait time -> Communication problem with GUI or external system
Large load time -> Program buffer, CUA buffer or screen buffer too small
Large database request times -> CPU/memory bottleneck on database server, network problems, expensive SQL statements, database locks, missing indexes, missing statistics, small buffers
Large CPU times -> Expensive ABAP processing, for example, processing large tables, frequent accessing of R/3 buffers
Processing time much larger than CPU time -> CPU bottlenecks, network problems, communication problems
R/3 Workload monitor (ST03N)
Problem: Wait time > 10% of response time!
Result: General performance problem.
Problem: High database time: database time > 40% of (response time – wait time)
Solution: Detailed analysis of the database.
Problem: Processing time > CPU time * 2
Solution: Detailed analysis of hardware bottlenecks.
Problem: Load time > 50 ms.
Solution: Detailed analysis of R/3 memory configuration (is the program buffer too small?)
Problem: Roll wait time or GUI time > 200 ms.
Solution: Detailed analysis of interfaces and GUI communication.
In the workload monitor, choosing transaction profile enables you to find out:
  • The most used transactions. Tuning these transactions creates the greatest improvements in the overall performance.
  • The average response times for typical R/3 transactions.
To access the statistical record of a specific server:
Transaction :   STAD after Release of 4.5
STAT before Release of 4.5
Transaction profile (Transaction ST03N) sorted by ‘Response time total’
Programs with high CPU time: CPU time > 40% of (response time – wait time)
Detailed analysis with ABAP-TRACE (SE30)
Programs with high database time: database time > 40% of (response time – wait time)
Detailed analysis of SQL Statements (ST05)
Problems with high GUI times (>200ms)
Solution: Network check
Workload Monitor
To display the 40 slowest dialog steps by response time, choose Top time.
Under Goto -> profiles, you can access, for example:
Task type profile – Workload statistics according to work process type
Time profile – Workload statistics according to hour
Transaction profile – Workload statistics according to transaction
The proportion of database calls to database requests gives an indication of the efficiency of table buffering. If access to information in a table is buffered in the R/3 pool buffers, then database calls to the database server are not needed and performance is better. Thus, the fewer database calls that result in actual database requests, the better.
Using transaction profile of ST03N, you find out:
Which transactions are used most? Tuning these transactions creates the greatest improvements in overall performance.

Functions of the SAP Memory Management System


You must be familiar with basic terminology related to memory management.

You can find a summary of the terms in Memory Management: Basic Terms.


An application runs in an SAP work process where an ABAP program is normally executed. The process requires memory to do this, which is allocated to the process by the memory management system. The order in which the work process is assigned the memory type depends on the work process type, either dialog or non-dialog (see SAP Memory Types), and the underlying operating system.

This is described in more detail in the documentation on the operating system.

The location of the various memory areas in the virtual address space is explained in Virtual Address Space of a Work Process.

The area of a user context that is directly accessible is now extended as needed, if the user context has expanded.  For dialog work processes, the data of the user context, including internal tables, is located in this expanded area.  You can therefore access all the data in the user context. Only the data types “extract” and “export to memory” remain in the SAP Paging area.

The SAP Roll Area is used for the initial memory assigned to a user context, and (if available) for additional memory if the expanded memory is full.

The following diagram displays the memory types that can be assigned to R/3 work processes on the SAP and operating system level. Here are the most important system profile parameters that control the availability of the memory types.

Whenever a dialog step is executed, a roll action occurs between the roll buffer in the shared memory and the memory area, which is allocated according to ztta/roll_first in a dialog process. Then the area in the shared memory is accessed that belongs to this user context.

The following graphic displays the roll process performed by the dispatcher.

  • Roll-in: Cross-user data is rolled in from the common resource into the work process (and is processed there).
  • Roll-out: User-specific data is rolled out from the work process into the common resource (after the dialog step has ended).

The common resource stands for the different SAP memory types.


Using the VM Container

If the SAP Virtual Machine Container is active in your system, Java programs can also be executed. Memory management has been enhanced for this purpose, see Memory Management in the VM Container.



User Administration and Authentication

The SAP MI Server Component uses the SAP user administration of the SAP Web Application Server (SAP Web AS). SAP Mobile Infrastructure also supports single sign-on.

The following cases can occur during authentication and user management on the mobile device:

Authentication with User and Password

The user management of the SAP MI Client Component manages user IDs and local logon passwords. The local logon password is used for local user authentication. It is stored in coded form on the mobile device, and not in plain text. A second password, called the synchronization password, is used for synchronization with the SAP MI Server Component (SAP Web AS). The technical difference between the local logon password and the synchronization password allows you to scale security, see the section on Passwords.

You can change the passwords on the client side at any time. The data, however, can only be synchronized successfully if there are equivalent values for the user ID and the synchronization password for the SAP MI Client Component on the SAP MI Server Component. Users can change both passwords with the SAP MI Client Component, see Passwords in the SAP MI.

You can replicate user data in the SAP MI Server Component in the following ways:

  • With central SAP user administration

If you are using central user administration, you can use it to keep the user data in the SAP MI Server Component synchronous with that in the backend systems.

  • With Report WAF_DEPLOYMENT_FROM_ROLES (Activation with User Group MESYNC)

Use this report to keep user data synchronous in the SAP MI Server Component and in the backend systems. See Reports for Scheduling Background Jobs and Creating User Groups for Synchronization.

You cannot automatically replicate user data for the SAP MI Client Component. On the mobile device, the end user must manually keep the user ID synchronous with that on the server. The user of the SAP MI Client Component must be the same as that of the SAP MI Server Component.

Authentication with Single Sign-On

You can configure the SAP MI Client Component to support single sign-on (SSO) if the device is available with an online connection. The mobile device receives the SAP logon ticket from a system that issues tickets, e.g. from SAP Enterprise Portal. The mobile device can then be authenticated on the SAP MI Server Component with the SAP logon ticket without the user needing to enter an additional password. For authentication with single sign-on, the following requirements must be satisfied:

  • The SAP MI Server Component (SAP Web AS) is configured to support SAP logon tickets, see Authentication and Single Sign-On.
  • The JSP version of the SAP MI Client Component is installed on the mobile device and configured for single sign-on.
  • A Win32 operating system is installed on the mobile device.

The following scenarios can be configured when you use single sign-on, see Setting Up Single Sign-On on the Mobile Device:

  • One user – SAP MI-based

The device is used by a single user. The user starts the SAP MI Client Component on the mobile device, which requests a ticket from the ticket-issuing system; this ticket is used for the initial logon and for synchronization.

In this scenario, users only need to enter a user name and password when they log onto the ticket-issuing system. Authentication on the SAP MI uses the SAP logon ticket only; existing settings for password handling are therefore ignored, and there is no password management in the SAP MI.

In the initial logon, which must be performed online, the user data of the logon ticket is used to create a user in the SAP MI Client Component.

  • One user – SAP MI access from a system issuing tickets, e.g. SAP Enterprise Portal

The device is used by a single user. Users start the SAP MI on their mobile device as a service that runs in the background without a user interface.

To work with the SAP MI, users start the interface of the SAP MI with a link, for example in SAP Enterprise Portal.

Because the user has already logged onto the ticket-issuing system, a logon ticket is already available when the user interface of the SAP MI is started; it does not need to be requested explicitly.

  • Multiple users

The device is used by multiple users. Users start the SAP MI on their mobile device as a service that runs in the background without a user interface.

To work with the SAP MI, users start the interface of the SAP MI with a link, for example in SAP Enterprise Portal.

If there is no ticket, users can start the SAP MI from the browser at the configured start address and log on with their user name and password. In this case, settings in the SAP MI for handling passwords are taken into account, and password management is available in the SAP MI.

A user name and password must be created in the SAP MI Client Component before a user can use a SAP logon ticket.
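The scenarios above come down to one decision: if a valid SAP logon ticket is available, it is used for authentication and the client's password-handling settings are ignored; otherwise the user logs on with user name and password and those settings apply. A minimal sketch of this decision (illustrative Python; the function and parameter names are hypothetical and not part of the SAP MI API):

```python
# Illustrative sketch of the logon decision described above.
# Not SAP MI code; names are hypothetical.

def choose_authentication(ticket_available: bool, sso_configured: bool) -> str:
    """Return which authentication path the client takes."""
    if sso_configured and ticket_available:
        # SAP logon ticket is used; password-handling settings are ignored.
        return "sso-ticket"
    # No usable ticket: log on with user name and password;
    # the client's password-handling settings remain in effect.
    return "password"
```

For example, a device with SSO configured but no ticket (multiple-user scenario before the first portal logon) falls back to the password path.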

Tools for User Administration

The following tools are available:

  • User maintenance (Transaction SU01) – create users; see User Maintenance.
  • User maintenance mass changes (Transaction SU10) – change multiple users at the same time; see Mass Changes.
  • Role maintenance with the Profile Generator (Transaction PFCG) – create and edit roles; see Role Maintenance.
  • Central SAP user administration – maintain all users; see Central User Administration.
  • User groups (Transaction SUGR) – synchronize user groups; see Synchronizing User Groups.
  • Report WAF_DEPLOYMENT_FROM_ROLES – synchronize user data in the SAP MI Server Component and in the backend systems; see Reports for Scheduling Background Jobs.


The following user types are needed, see Role Editing for Mobile Applications:

  • Technical users

      – Batch users for replication

      – Batch users for role comparison

      – RFC users for connections to the backend systems (if you do not want to use the current logon user to connect to the backend system; see Communications Destinations)

      – Service users for creating detailed error message texts if the server logon fails; see Configuring the Display of Messages for Logon Errors

  • Individual users (for logging onto the backend systems and for using the synchronization function)

      – Users in the backend systems

      – Users with synchronization authorization for each mobile end user in the SAP MI Server Component

      – Users on the mobile device corresponding to those on the server

      – Administrators for the SAP Mobile Infrastructure Web Console

      – Administrators for the Computing Center Management System (CCMS)

No users are delivered with the software.

  • SAP MI Client Component, end user – not delivered; dialog user; no default password; installed by the end users themselves.
  • SAP MI Server Component, end user – not delivered; dialog user; default password INIT if created with the copy function; installed by the administrator of the SAP MI Web Console.
  • SAP MI Server Component, administrators for the SAP MI Web Console – not delivered; dialog user; no default password; installed by a superior user administrator.
  • SAP MI Server Component, administrator for CCMS – not delivered; dialog user; no default password; installed by a superior user administrator.
  • SAP MI Server Component, administrator for Smart Synchronization – not delivered; dialog user; no default password; installed by a superior user administrator.
  • SAP MI Server Component, batch user for batch tasks – not delivered; system or dialog user; no default password; installed by a superior user administrator.
  • SAP MI Server Component, service user for displaying detailed error message texts if server logon failed – not delivered; system user; no default password; installed by a superior user administrator.
  • Backend, end user – not delivered; dialog user; no default password; installed by the administrator of the backend system.


Passwords (Without Single Sign-On)

When the administrator creates individual users for the SAP MI Server Component, the system generates a password for the initial logon. The end user then has to log onto the server (SAP Web AS) once directly and change the password; see Changing the Initial Password.

The SAP MI Client Component supports the technical difference between the synchronization password and the local logon password. The local logon password is used for offline authentication on the SAP MI Client Component. The synchronization password is used for online authentication on the SAP MI Server Component (SAP Web AS). The online authentication takes place at the beginning of the synchronization cycle. The user ID and the synchronization password are transferred to the server and verified there.

In the configuration file mobileengine.config, the administrator can define how the synchronization password and the local logon password are handled; see Predefining and Setting Parameters for All Users. Possible values for the parameter MobileEngine.Security.SynchronizationPasswordHandlingOption are:

  • atSync – Synchronization password does not correspond to the local logon password and must be entered for each synchronization (default value).
  • local – Synchronization password corresponds to the local logon password and need not be entered at synchronization.
  • once – Synchronization password does not correspond to the local logon password and must be entered once for each logon.
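For example, to let the synchronization password follow the local logon password, the administrator could set the parameter in mobileengine.config as follows (a fragment only; other entries in the file are unaffected, and the exact surrounding entries depend on your installation):

```
MobileEngine.Security.SynchronizationPasswordHandlingOption=local
```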

The synchronization option Timed Sync is not possible in combination with the setting atSync. It is only possible with the setting once after the end user has entered the synchronization password once, e.g. from the user settings. With the setting local, the synchronization option Timed Sync can be used without restrictions.

The SAP MI Client Component does not store the synchronization password for the settings atSync and once. Instead, the user must enter it for each synchronization or once per logon, depending on the setting.
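The effect of the three settings on when the synchronization password must be entered, and on whether Timed Sync can be used, can be sketched as follows (illustrative Python modeling the rules described above; not SAP MI code, and the function names are hypothetical):

```python
# Illustrative model of the three values of
# MobileEngine.Security.SynchronizationPasswordHandlingOption.

def must_enter_sync_password(setting: str, entered_this_logon: bool) -> bool:
    """Does the user have to type the synchronization password now?"""
    if setting == "local":
        return False                    # reuses the local logon password
    if setting == "once":
        return not entered_this_logon   # once per logon, then cached
    if setting == "atSync":
        return True                     # required at every synchronization
    raise ValueError(f"unknown setting: {setting}")

def timed_sync_possible(setting: str, password_entered: bool) -> bool:
    """Timed Sync needs a password the client can supply unattended."""
    if setting == "local":
        return True                     # no restriction
    if setting == "once":
        return password_entered         # only after one manual entry
    return False                        # atSync: not possible
```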

The end user must manually synchronize the user ID and synchronization password on the mobile device with the settings used on the server. If multiple users are using the same mobile device, they all need their own user IDs and must keep the ID and synchronization password synchronous with the settings used on the server.

With the atSync and once settings, there are no restrictions on the minimum and maximum password lengths.

For general information about passwords, see Security Measures Related to Password Rules.

The SAP MI Client Component distinguishes between uppercase and lowercase characters in passwords.