Process waits on SQL*Net message from dblink / SQL*Net message from client

Hi There,
We have an ETL process that we kindly need your help with. The process has been running since Sunday, transferring data from another server via a remote query. It was running fine until last night, when it appeared to stop working; the session now seems to be idling, doing nothing.
Here are some tests that we did to figure out what's going on:
1. When looking at the session I/O, we noticed that it is not changing:
etl_user@datap> select sess_io.sid,
  2         sess_io.block_gets,
  3         sess_io.consistent_gets,
  4         sess_io.physical_reads,
  5         sess_io.block_changes,
  6         sess_io.consistent_changes
  7    from v$sess_io sess_io, v$session sesion
  8   where sesion.sid = sess_io.sid
  9     and sesion.username is not null
10     and sess_io.sid=301
11  order by 1;
                    logical   physical
  SID BLOCK_GETS      reads      reads BLOCK_CHANGES CONSISTENT_CHANGES
  301  388131317   97721268   26687579     223052804             161334
Elapsed: 00:00:00.01
2. Check there is nothing blocking the session:
etl_user@datap> select * from v$lock where sid=301;
ADDR     KADDR           SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
684703F0 6847041C        301 DX         35          0          1          0      45237          0
684714C4 684714F0        301 AE     199675          0          4          0     260148          0
619651EC 6196521C        301 TM      52733          0          3          0      45241          0
67F86ACC 67F86B0C        301 TX     458763      52730          6          0      45241          0
3. Check if the session is still valid:
etl_user@datap> select status from v$session where sid=301;
STATUS
ACTIVE
4. Check if there is anything in long ops that has not completed:
etl_user@datap> SELECT SID, SERIAL#, opname, SOFAR, TOTALWORK,
  2      ROUND(SOFAR/TOTALWORK*100,2) COMPLETE, TIME_REMAINING/60
  3      FROM   V$SESSION_LONGOPS
  4      WHERE
  5      TOTALWORK != 0
  6      AND    SOFAR != TOTALWORK
  7     order by 1;
no rows selected
Elapsed: 00:00:00.00
5. Check if there is anything in long ops for the session:
etl_user@datap> r
  1* select SID,SOFAR,TOTALWORK,START_TIME,LAST_UPDATE_TIME,TIME_REMAINING,MESSAGE from V$SESSION_LONGOPS where sid=301
  SID      SOFAR  TOTALWORK START_TIM LAST_UPDA TIME_REMAINING MESSAGE
  301          0          0 22-JUL-12 22-JUL-12                Gather Table's Index Statistics: Table address_etl : 0 out of 0 Indexes done
Elapsed: 00:00:00.00
This is a bit odd! This particular step actually completed successfully on the 22nd of July, and we don't know why it is still showing in long ops. Any ideas?
6. Looking at the SQL and what it is actually doing:
etl_user@datap> select a.sid, a.value session_cpu, c.physical_reads,
  2  c.consistent_gets,d.event,
  3  d.seconds_in_wait
  4  from v$sesstat a,v$statname b, v$sess_io c, v$session_wait d
  5  where a.sid= &p_sid_number
  6  and b.name = 'CPU used by this session'
  7  and a.statistic# = b.statistic#
  8  and a.sid=c.sid
  9  and a.sid=d.sid;
Enter value for p_sid_number: 301
old   5: where a.sid= &p_sid_number
new   5: where a.sid= 301
             CPU   physical    logical                                   seconds
  SID       used      reads      reads EVENT                             waiting
  301    1966595   26687579   97721268 SQL*Net message from dblink         45792
Elapsed: 00:00:00.03
7. We looked at the remote DB where the data resides, and we noticed that the remote session was also waiting on the DB link:
SYS@destp> select a.sid, a.value session_cpu, c.physical_reads,
  2  c.consistent_gets,d.event,
  3  d.seconds_in_wait
  4  from v$sesstat a,v$statname b, v$sess_io c, v$session_wait d
  5  where a.sid= &p_sid_number
  6  and b.name = 'CPU used by this session'
  7  and a.statistic# = b.statistic#
  8  and a.sid=c.sid
  9  and a.sid=d.sid;
Enter value for p_sid_number: 388
old   5: where a.sid= &p_sid_number
new   5: where a.sid= 390
       SID SESSION_CPU PHYSICAL_READS CONSISTENT_GETS EVENT                                                    SECONDS_IN_WAIT
       390         136              0            7605 SQL*Net message from client                                        46101
SYS@destp>
We have had an issue in the past where the connection was dropped by the network when the process runs for a few days, hence we added the following to the sqlnet.ora and listener.ora files:
sqlnet.ora:
SQLNET.EXPIRE_TIME = 1
SQLNET.INBOUND_CONNECT_TIMEOUT = 6000
listener.ora:
INBOUND_CONNECT_TIMEOUT_LISTENER = 6000
What else can we do or further investigate to work out the root cause of the problem, and perhaps help resolve it? We don't want to just stop and restart the process, as it has already taken a few days. We have had a chat with the infrastructure team and they have assured us that there have been no network outages.
Also, the alert logs for both instances (local and remote) show no errors whatsoever!
Your input is highly appreciated.
Thanks
Edited by: rsar001 on Jul 25, 2012 10:22 AM

We ran the query below on both the local and remote DB, and no rows were returned:
etl_user@datap> SELECT DECODE(request,0,'Holder: ','Waiter: ')||vl.sid sess, status,
  2  id1, id2, lmode, request, vl.type
  3  FROM V$LOCK vl, v$session vs
  4  WHERE (id1, id2, vl.type) IN
  5  (SELECT id1, id2, type FROM V$LOCK WHERE request>0)
  6  and vl.sid = vs.sid
  7  ORDER BY id1, request
  8  /
no rows selected
Elapsed: 00:00:00.21
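One further check that might help (a sketch only; it assumes 10g or later so that V$SESSION exposes SQL_ID, EVENT and LAST_CALL_ET directly, and SID 390 is the remote db link session from the output above): on the remote DB, see what that session last executed and how long ago it made its last call:
-- what is the db link session currently/last running, and how long since its last call?
select s.sid, s.status, s.event, s.seconds_in_wait,
       s.last_call_et, q.sql_text
  from v$session s, v$sql q
 where s.sid = 390
   and q.sql_id (+) = s.sql_id
   and q.child_number (+) = s.sql_child_number;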

Similar Messages

  • SQL*Net message from client - huge wait in trace file

    Dear All,
    I am facing a performance issue with a particular operation (which used to complete in 21 minutes). Now the same operation takes more than 35 minutes. I took a trace of those sessions (10046 level 12 trace) and found a lot of waits on SQL*Net message from client.
    Elapsed times include waiting on following events:
    Event waited on                             Times   Max. Wait  Total Waited
    ----------------------------------------   Waited  ----------  ------------
    SQL*Net message from client                611927       10.00       1121.35
    I copied only the highest wait in the tkprof output.
    From the tkprof output, and even in the raw trace file, I found that this event waits longest after executing:
    SELECT sysdate AS SERVERDATE from dual;
    Elapsed times include waiting on following events:
    Event waited on                             Times   Max. Wait  Total Waited
    ----------------------------------------   Waited  ----------  ------------
    SQL*Net message to client                      115        0.00          0.00
    SQL*Net message from client                    115       10.00        724.52
    Please help me find out why this wait is taking so long, especially for the above query.
    Regards,
    Vinodh

    Vinodh Kumar wrote:
    Hi,
    This is what available in the trace file
    PARSING IN CURSOR #2 len=38 dep=0 uid=60 oct=3 lid=60 tim=7052598842 hv=3788189359 ad='7d844fa0'
    SELECT sysdate AS SERVERDATE FROM dual
    END OF STMT
    PARSE #2:c=0,e=12,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=7052598839
    BINDS #2:
    EXEC #2:c=0,e=42,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=7052599002
    WAIT #2: nam='SQL*Net message to client' ela= 1 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=7052599058
    FETCH #2:c=0,e=15,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,tim=7052599110
    *** 2012-01-02 17:07:30.364
    WAIT #2: nam='SQL*Net message from client' ela= 10007957 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=7062607120
    Please note the last WAIT line above -- it appears in the complete trace right after executing this query.
    In the AWR report, this query took less than a second over more than 2000 executions.
    Regards,
    Vinodh
    Good idea to check the raw trace file. It is important to notice that this particular wait event appears after the fetch of the result from the database. The client receives the SYSDATE from the database server, and then the client performs some sort of action for about 10 seconds before submitting its next request to the database. The SQL statements that immediately follow and immediately precede this section of the trace file might provide clues regarding what caused the delay, and where that delay resides in the client-side code. Maybe a creative developer added a "sleep for 10 seconds" routine when intending to sleep for 10ms? Maybe the client CPU is close to 100% utilization?
    Charles Hooper
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
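    As a side note, a quick way to cross-check the tkprof numbers (a sketch only; &p_sid_number is just a placeholder for the session being traced, and TIME_WAITED in V$SESSION_EVENT is reported in centiseconds) is to look at the cumulative wait picture for that session:
    -- cumulative SQL*Net waits for one session since logon
    select sid, event, total_waits, time_waited
      from v$session_event
     where sid = &p_sid_number
       and event like 'SQL*Net%'
     order by time_waited desc;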

  • Getting SQL*Net more data from client waits when running a query through web based interface

    Hi, you all,
    We are seeing this weird behavior when running a query through a web-based interface: we get a lot of "SQL*Net more data from client" waits, and OEM indicates that the current wait event is SQL*Net more data from client.
    It's just a very simple query which invokes a DB link.
    When I execute the same query in a PL/SQL tool like Toad or SQL Developer it works fine, but when that query runs inside an application executed through the web-based interface, it hangs forever.
    Where can I start looking for the problem?
    We are working on a 3-node RAC 11gR2; both databases are on the same RAC.
    Thanks.

    Hi,
    we managed to reproduce the case in a test environment; below are the steps:
    1) Have 2 databases on different machines; we will call the first one local and the other one remote.
    2) In the local database:
    a - create a DBLink to the remote database.
    b - read data from the remote database (we simply used select count(*) from dummy_table).
    c - insert data into a table on the local database.
    d - terminate the connection between the 2 databases (disconnect either machine from the network).
    e - commit on the local database.
    What we noticed was the following:
    1) When the local database is disconnected from the network (the machine is not connected to any network at the moment): an error is thrown almost immediately, and issuing the following:
    select * from dba_2pc_pending;
    we found some data.
    2) When the remote database was disconnected (the local database still connected to the network): after 7-8 seconds an error is thrown, and issuing the following:
    select * from dba_2pc_pending;
    did not return any data.
    Since this is pretty similar to our case, we concluded that it's a network issue.
    Is this the correct behavior?
    As a temporary solution till the network issue is fixed, we did the following:
    1) changed the call of the remote procedure to calling a local procedure that calls the remote procedure.
    2) added pragma autonomous_transaction to the local procedure.
    3) at the end of the local procedure, rollback the autonomous transaction.
    It seems that since the global transaction does not use the DBLink, the database does not issue a 2PC commit.
    This works in our case since the DBLink is only used to read data.
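    For illustration, a minimal sketch of the workaround described above, with made-up names (call_remote_wrapper, remote_proc and remote_link are hypothetical): the remote call is isolated in a local procedure running an autonomous transaction, which is rolled back because the link is only used to read data.
    create or replace procedure call_remote_wrapper is
      pragma autonomous_transaction;
    begin
      -- the remote work is isolated inside this autonomous transaction
      remote_proc@remote_link;
      -- the link was only used to read data, so release the distributed
      -- transaction instead of committing it
      rollback;
    end;
    /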

  • SQL that shows which client process is connected to which server process

    Hi,
    I am running database on Linux.
    I have many oraclePROD processes (up to 100). These are dedicated server processes, which are opened by several client processes.
    I want to know which server process is connected to which client process. I see that v$session gives the session# and client process, but does not give information about the related dedicated server process.
    Is there any v$ view that gives this information?
    Thank you.

    user12952237 wrote: [question quoted in full above]
      1* select process from sys.v_$session where username = 'USER1'
    SQL> /
    PROCESS
    29711
    SQL> !ps -ef | grep sqlplus
    bcm      29711  1980  0 08:04 pts/0    00:00:00 sqlplus
    bcm      29761 29711  0 08:07 pts/0    00:00:00 /bin/bash -c ps -ef | grep sqlplus
    bcm      29763 29761  0 08:07 pts/0    00:00:00 grep sqlplus
    SQL>
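    To answer the original question more directly, here is a sketch of the usual join: V$SESSION.PROCESS is the client-side process, and the dedicated server's OS PID comes from V$PROCESS through the PADDR/ADDR link.
    -- map each client process to its dedicated server process OS PID
    select s.sid, s.username, s.process as client_process, p.spid as server_process
      from v$session s, v$process p
     where p.addr = s.paddr
       and s.username is not null
     order by s.sid;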

  • I just ordered 8 calendars from iPhoto and they came to me fine. Now I need to order two more, but when I go through the process I get a message saying: unable to assemble calendar. There is a problem with the photo with the file name "(Null)" more........ .

    Would someone be able to explain to me the following issue with Iphoto?
    I ordered 8 of the same calendars for my soccer team and received them fine, although a couple of pictures in them are a little off (out of focus). I need to order two more of the same calendars, but when I go through the process I receive an error message saying:
    "Unable to assemble calendar. There is a problem with the photo with the file name "(Null)". The full resolution version of this photo either cannot be located or is corrupt. Please replace this photo or delete it from your calendar."
    How can I find this "corrupt" photo? How did it go through with the first batch of calendars but won't go through now?
    Thank you for your help.   

    Apply the two fixes below in order as needed:
    Fix #1
    Launch iPhoto with the Command+Option keys held down and rebuild the library.
    Since only one option can be run at a time start
    with Option #4 and then #1 as needed.
    Fix #2
    Using iPhoto Library Manager  to Rebuild Your iPhoto Library
    1 - download iPhoto Library Manager and launch.
    2 - click on the Add Library button, navigate to your Home/Pictures folder and select your iPhoto Library folder.
    3 - Now that the library is listed in the left hand pane of iPLM, click on your library and go to the File ➙ Rebuild Library menu option.
    4 - In the next  window name the new library and select the location you want it to be placed.
    5 - Click on the Create button.
    Note: This creates a new library based on the LIbraryData.xml file in the library and will recover Events, Albums, keywords, titles and comments.  However, books, calendars, cards and slideshows will be lost. The original library will be left untouched for further attempts at fixing the problem or in case the rebuilt library is not satisfactory.
    OT

  • Can anyone guide me to ideas on basics of parallel processing in SQL

    Hi all,
    Can anyone guide me to the basics of parallel processing in SQL and its use in performance tuning of a query? If so, what is the syntax to be followed, and how does one arrive at the optimized query after tuning?

    My 2'c on the subject.
    Don't break your head over parallel query (PQ) processing. It should be something that is of concern to the DBA/Oracle architect - not the developer.
    Yes, it is good to be aware of how it works. But you should not write hints in your code that forces PQ processing.
    Simple example. You use hints to force a PQ of degree 5 (meaning 5 PQ processes will be used). It works great on development. In production, 10 users run that query at roughly the same time. The PQ ceiling is 20 PQ processes. The first 4 users each get 5 PQ processes and the remaining 6 get none. Or another developer did the same thing, only he was very greedy and coded a PQ of degree 20 into his SQL. So his SQL is now consuming all available PQ processes. So how did forcing PQ in your code address performance? It did, just for a couple of users, with the majority of users now facing even worse performance.
    The DBA is the one that should be tuning this. He/she can set the degree of PQ per table. Control the size of the PQ etc.
    And it does not stop there. The primary reason for PQ in Oracle is to lower the latency of I/O. Let's say Oracle needs to read 100MB worth of data (full table scan or fast full index scan for example). It uses multi-block (sequential) reads to read bigger chunks at a time. Still, a single process can only read I/O so fast - the speed of which is entirely dependent on the I/O subsystem and hardware.
    Using a second process to assist with the I/O can reduce the overall time. Instead of 1 process reading 100MB of data, there are now 2 processes only having to read 50MB each.
    But as I mentioned, the actual I/O throughput is a lower-level function. Let's say you start 100 processes. Great - each only has to read 1MB worth of data. Response should be fast.
    Wrong. Those 100 processes seriously overload the I/O subsystem and throttle it so badly that the complete platform's performance is degraded severely. So instead of these 100 process speeding up I/O performance, they trash the performance of the entire platform.
    Sure, this is an extreme example. But the basic concepts are usually illustrated much better using such an example.
    So, you as developer deciding on just whether to use PQ and just how many PQ processes to use...? Wrong. It is not your decision, not your area of responsibility and usually not your area of expertise.
    Know what PQ is. Know how PQ works. But think long and hard before forcing PQ via your code (using hints) on a production platform.
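    To make the point concrete, a small sketch of the two approaches discussed above (big_table and the degree of 4 are made-up examples): the DBA-controlled table-level setting versus the hard-coded hint the reply advises against.
    -- DBA-controlled: let the optimizer use parallelism for this table
    alter table big_table parallel (degree 4);

    -- developer-forced (discouraged above): the degree is baked into the SQL
    select /*+ parallel(t, 4) */ count(*)
      from big_table t;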

  • TCP connection closed but a child process of SQL Server may be holding a duplicate of the connection's socket in SQL2008R2

    Hello,
    I do get the below SQL error in production environment intermittently:
    TCP connection closed but a child process of SQL Server may be holding a duplicate of the connection's socket.  Consider enabling the TcpAbortiveClose SQL Server registry setting and restarting SQL Server. If the problem persists, contact Technical Support.
    According to the post I found on MSDN, the above error is fixed in SQL2008R2 CU6, but I have the SQL2008R2-SP02 CU09 patch in the production environment and the above error still occurs intermittently. I am running SQL2008R2 SP02 CU09 with Windows 2008R2-SP01.
    I would like to know if anyone has had the same error happen in their SQL environment after applying the SQL2008R2-SP02 CU06 patch or later.
    Any suggestion would be helpful.
    Best regards,
    PL.

    Hello,
    What happens if you apply the registry changes explained in the workaround section of the following article?
    http://support.microsoft.com/kb/2491214
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • How the SQL Query Parsing is processing inside SQL/PLSQL engine?

    Hi all,
    Can you explain how SQL query parsing is processed inside the SQL/PLSQL engine?
    Thanks,
    Sankar

    Sankar,
    Oracle Database Concepts - Chapter 24.
    You will find the required explanation under the heading Parsing.
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14220/sqlplsql.htm

  • How to monitor SM50 spool work process wait time from RZ20

    Hello All,
    I want to monitor the SM50 spool work process wait time from RZ20 (like the dialog response time from RZ20). Please help me with how to do this.
    Regards
    Subbu

    Hi Subbu,
    You may refer to the SAP help on CCMS to get the list of MTEs to be configured for spool monitoring:
    SpoolNumbers - Spool numbers that every output request is assigned
    UsedNumbers - Percentage usage of the spool numbers; you must delete old output requests before this number reaches 100 percent
    Status - Only displayed if the spool service belongs to an SAP application server
    WaitTime - Wait time in the spool service in seconds
    Monitoring the Spool System (SAP Library - SAP Printing Guide (BC-CCM-PRN))
    Hope this helps.
    Regards,
    Deepak Kori

  • When trying to install an Adobe extension, during the installation process I get a message that says "This extension cannot be installed - requires version 13.0 or greater."  I am installing from within CC

    When trying to install an Adobe extension, during the installation process I get this message: "This extension cannot be installed - requires version 13.0 or greater."  I am installing through Photoshop CC - appreciate any help.

    Hi Gary.  Extensions have to be written for the specific Photoshop versions they are designed to run on, and they definitely don't all run on every version of Photoshop.  You'd think that you'd be OK given that message, but it might be that it works fine on CS6 (13) but not on CC (14).
    But it could be an error.
    What is the extension you are trying to install?  Are Photoshop and Extension Manager fully up to date?  If you are sure that the extension runs on CC, I'd try uninstalling and reinstalling the Extension Manager.  I have had to do exactly that a while back.

  • Application waits for a responce from the DB ...

    Hi friends,
    I have a multi-threaded .NET application that interacts with an Oracle 10g DB.
    When the application executes continuously for less than 4 hours there is no problem, but if it continues executing for more than 5 hours the application doesn't respond.
    To track the problem I followed the standard logging process, i.e. logging to a file. When I analyzed the file I could see the application sending a request to the DB and waiting for the reply, but the server doesn't reply.
    One thing to be looked into is that the same piece of code had been executing for 5 hours.
    I have no idea about this behavior; someone please help me out ...

    Hi Pierre Forstmann,
    I am not an Oracle DBA expert but managed to get some information:
    Error message in database alert log file.
    Alert_<DB_NAME>.log
    ORA-1654: unable to extend index SYSMAN.MGMT_SYSTEM_PERF_LOG_IDX_01 by 8 in tablespace SYSAUX
    There are no queries waiting on DBA_WAITERS or DBA_BLOCKERS.
    The application is executing PL/SQL statement (Procedure, Functions).
    Four tables are related to this process.
    Initial Count
    T1 - 0 Records
    T2 - 0 Records
    T3 - 0 Records
    T4 - 0 Records
    Count after 4-5 hrs of execution
    T1 - 38269 Records
    T2 - 654818 Records
    T3 - 992579 Records
    T4 - 4422 Records
    I have identified the session that is used by my code from V$SESSION.
    Initially when i issued the below select statement:
    SELECT status,state,wait_time,seconds_in_wait FROM V$SESSION WHERE USERNAME = 'XXX';
    STATUS;STATE;WAIT_TIME;SECONDS_IN_WAIT
    INACTIVE;WAITING;0;1377
    ACTIVE;WAITED SHORT TIME;-1;2
    ACTIVE;WAITED SHORT TIME;-1;0
    INACTIVE;WAITING;0;1
    INACTIVE;WAITING;0;1377
    INACTIVE;WAITING;0;1377
    INACTIVE;WAITING;0;1377
    INACTIVE;WAITING;0;1377
    INACTIVE;WAITING;0;4
    INACTIVE;WAITING;0;1377
    Yes, The data in V$SESSION changes over time.
    After 5 Min interval
    INACTIVE;WAITING;0;1496
    ACTIVE;WAITED SHORT TIME;-1;2
    ACTIVE;WAITED SHORT TIME;-1;5
    INACTIVE;WAITING;0;2
    INACTIVE;WAITING;0;1496
    INACTIVE;WAITING;0;1496
    INACTIVE;WAITING;0;1496
    INACTIVE;WAITING;0;1496
    INACTIVE;WAITING;0;0
    INACTIVE;WAITING;0;1496
    After 2 Min interval
    INACTIVE;WAITING;0;1452
    INACTIVE;WAITING;0;0
    INACTIVE;WAITING;0;0
    INACTIVE;WAITING;0;0
    INACTIVE;WAITING;0;1452
    INACTIVE;WAITING;0;1452
    INACTIVE;WAITING;0;1452
    INACTIVE;WAITING;0;1452
    INACTIVE;WAITING;0;5
    INACTIVE;WAITING;0;1452
    After 1 Min interval
    INACTIVE;WAITING;0;1751
    ACTIVE;WAITED KNOWN TIME;2;0
    ACTIVE;WAITED KNOWN TIME;1;1
    INACTIVE;WAITING;0;7
    INACTIVE;WAITING;0;1751
    INACTIVE;WAITING;0;1751
    INACTIVE;WAITING;0;1751
    INACTIVE;WAITING;0;1751
    INACTIVE;WAITING;0;1
    INACTIVE;WAITING;0;1751
    After 4-5 hrs of execution, when the application was waiting for the reply from the DB server, I issued the same select statement.
    Result: no rows returned.
    I would like to know what would be the reason that all 10 sessions have disappeared from V$SESSION.
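    Regarding the ORA-1654 in the alert log above, a minimal sketch of a follow-up check (the datafile path in the comment is only a placeholder): see how much free space is left in SYSAUX, since the SYSMAN index could not extend there.
    select tablespace_name, round(sum(bytes)/1024/1024) free_mb
      from dba_free_space
     where tablespace_name = 'SYSAUX'
     group by tablespace_name;

    -- possible fix, path and size are placeholders:
    -- alter database datafile '/u01/oradata/PROD/sysaux01.dbf' resize 2g;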

  • What are the major process to transfer the data from legacy to sap system.

    What are the major processes to transfer data from a legacy system to the SAP system using BDC in real time?

    hi,
    BATCH DATA COMMUNICATION
    main methods are:
    1. SESSION METHOD
    2. CALL TRANSACTION
    3. DIRECT INPUT
    Advantages offered by BATCH INPUT method:
    1. Can process large data volumes in batch.
    2. Can be planned and submitted in the background.
    3. No manual interaction is required when data is transferred.
    4. Data integrity is maintained as whatever data is transferred to the table is through transaction. Hence batch input data is submitted to all the checks and validations.
    To implement one of the supported data transfers, you must often write the program that exports the data from your non-SAP system. This program, known as a “data transfer” program must map the data from the external system into the data structure required by the SAP batch input program.
    The batch input program must build all of the input to execute the SAP transaction.
    Two main steps are required:
    • To build an internal table containing every screen and every field to be filled in during the execution of an SAP transaction.
    • To pass the table to SAP for processing.
    Prerequisite for Data Transfer Program
    Writing a Data Transfer Program involves following prerequisites:
    Analyzing data from local file
    Analyzing transaction
    Analyzing transaction involves following steps:
    • The transaction code, if you do not already know it.
    • Which fields require input i.e., mandatory.
    • Which fields can you allow to default to standard values.
    • The names, types, and lengths of the fields that are used by a transaction.
    • Screen number and Name of module pool program behind a particular transaction.
    To analyze a transaction::
    • Start the transaction by menu or by entering the transaction code in the command box.
    (You can determine the transaction name by choosing System – Status.)
    • Step through the transaction, entering the data that will be required for processing your batch input data.
    • On each screen, note the program name and screen (dynpro) number.
    (dynpro = dyn + pro. Dyn = screen, pro = number)
    • Display these by choosing System – Status. The relevant fields are Program (dynpro) and Dynpro number. If pop-up windows occur during execution, you can get the program name and screen number by pressing F1 on any field or button on the screen.
    The technical info pop-up shows not only the field information but also the program and screen.
    • For each field, check box, and radio button on each screen, press F1 (help) and then choose Technical Info.
    Note the following information:
    - The field name for batch input, which you’ll find in its own box.
    - The length and data type of the field. You can display this information by double clicking on the Data Element field.
    • Find out the identification code for each function (button or menu) that you must execute to process the batch-input data (or to go to new screen).
    Place the cursor on the button or menu entry while holding down the left mouse button. Then press F1.
    In the pop-up window that follows, choose Technical info and note the code that is shown in the Function field.
    You can also run any function that is assigned to a function key by way of the function key number. To display the list of available function keys, click on the right mouse button. Note the key number that is assigned to the functions you want to run.
    Once you have program name, screen number, field name (screen field name), you can start writing.
    DATA TRANSFER program.
    Declaring internal table
    First Integral Table similar to structure like local file.
    Declaring internal table like BDCDATA
    The data from internal table is not transferred directly to database table, it has to go through transaction. You need to pass data to particular screen and to particular screen-field. Data is passed to transaction in particular format, hence there is a need for batch input structure.
    The batch input structure stores the data that is to be entered into SAP system and the actions that are necessary to process the data. The batch input structure is used by all of the batch input methods. You can use the same structure for all types of batch input, regardless of whether you are creating a session in the batch input queue or using CALL TRANSACTION.
    This structure is BDCDATA, which can contain the batch input data for only a single run of a transaction. The typical processing loop in a program is as follows:
    • Create a BDCDATA structure
    • Write the structure out to a session or process it with CALL TRANSACTION USING; and then
    • Create a BDCDATA structure for the next transaction that is to be processed.
    Within a BDCDATA structure, organize the data of screens in a transaction. Each screen that is processed in the course of a transaction must be identified with a BDCDATA record. This record uses the Program, Dynpro, and Dynbegin fields of the structure.
    The screen identifier record is followed by a separate BDCDATA record for each value, to be entered into a field. These records use the FNAM and FVAL fields of the BDCDATA structure. Values to be entered in a field can be any of the following:
    • Data that is entered into screen fields.
    • Function codes that are entered into the command field. Such function codes execute functions in a transaction, such as Save or Enter.
    The BDCDATA structure contains the following fields:
    • PROGRAM: Name of module pool program associated with the screen. Set this field only for the first record for the screen.
    • DYNPRO: Screen Number. Set this field only in the first record for the screen.
    • DYNBEGIN: Indicates the first record for the screen. Set this field to X, only for the first record for the screen. (Reset to ‘ ‘ (blank) for all other records.)
    • FNAM: Field Name. The FNAM field is not case-sensitive.
    • FVAL: Value for the field named in FNAM. The FVAL field is case-sensitive. Values assigned to this field are always padded on the right, if they are less than 132 characters. Values must be in character format.
    Transferring data from local file to internal table
    Data is uploaded to internal table by UPLOAD of WS_UPLOAD function.
    Population of BDCDATA
    For each record of internal table, you need to populate Internal table, which is similar to BDCDATA structure.
    All these five initial steps are necessary for any type of BDC interface.
    DATA TRANSFER program can call SESSION METHOD or CALL TRANSACTION. The initial steps for both the methods are same.
    First step for both the methods is to upload the data to internal table. From Internal Table, the data is transferred to database table by two ways i.e., Session method and Call transaction.
    SESSION METHOD
    About Session method
    In this method you transfer data from internal table to database table through sessions.
    In this method, an ABAP/4 program reads the external data that is to be entered in the SAP System and stores the data in session. A session stores the actions that are required to enter your data using normal SAP transaction i.e., Data is transferred to session which in turn transfers data to database table.
    Session is intermediate step between internal table and database table. Data along with its action is stored in session i.e., data for screen fields, to which screen it is passed, the program name behind it, and how the next screen is processed.
    When the program has finished generating the session, you can run the session to execute the SAP transactions in it. You can either explicitly start and monitor a session or have the session run in the background processing system.
    Unless session is processed, the data is not transferred to database table.
    BDC_OPEN_GROUP
    You create the session through program by BDC_OPEN_GROUP function.
    Parameters to this function are:
    • User Name: User name
    • Group: Name of the session
    • Lock Date: The date on which you want to process the session.
    • Keep: This parameter is passed as ‘X’ when you want to retain session after
    processing it or ‘ ‘ to delete it after processing.
    BDC_INSERT
    This function creates the session & data is transferred to Session.
    Parameters to this function are:
    • Tcode: Transaction Name
    • Dynprotab: BDC Data
    BDC_CLOSE_GROUP
    This function closes the BDC Group. No Parameters.
    Some additional information for session processing
    When the session is generated using the KEEP option within the BDC_OPEN_GROUP, the system always keeps the sessions in the queue, whether it has been processed successfully or not.
    However, if the session is processed, you have to delete it manually. When session processing is completed successfully while KEEP option was not set, it will be removed automatically from the session queue. Log is not removed for that session.
    If the batch-input session is terminated with errors, then it appears in the list of INCORRECT session and it can be processed again. To correct incorrect session, you can analyze the session. The Analysis function allows to determine which screen and value has produced the error. If you find small errors in data, you can correct them interactively, otherwise you need to modify batch input program, which has generated the session or many times even the data file.
    CALL TRANSACTION
    About CALL TRANSACTION
    A technique similar to SESSION method, while batch input is a two-step procedure, Call Transaction does both steps online, one after the other. In this method, you call a transaction from your program by
    Call transaction <tcode> using <BDCTAB>
    Mode <A/N/E>
    Update <S/A>
    Messages into <MSGTAB>.
    Parameter – 1 is transaction code.
    Parameter – 2 is name of BDCTAB table.
    Parameter – 3 here you are specifying mode in which you execute transaction
    A is all screen mode. All the screen of transaction are displayed.
    N is no screen mode. No screen is displayed when you execute the transaction.
    E is error screen. Only those screens are displayed wherein you have error record.
    Parameter – 4 here you are specifying update type by which database table is updated.
    S is for Synchronous update in which if you change data of one table then all the related Tables gets updated. And sy-subrc is returned i.e., sy-subrc is returned for once and all.
    A is for Asynchronous update. When you change data of one table, the sy-subrc is returned. And then updating of other affected tables takes place. So if system fails to update other tables, still sy-subrc returned is 0 (i.e., when first table gets updated).
    Parameter – 5 when you update database table, operation is either successful or unsuccessful or operation is successful with some warning. These messages are stored in internal table, which you specify along with MESSAGE statement. This internal table should be declared like BDCMSGCOLL, a structure available in ABAP/4. It contains the following fields:
    1. Tcode: Transaction code
    2. Dyname: Batch point module name
    3. Dynumb: Batch input Dyn number
    4. Msgtyp: Batch input message type (A/E/W/I/S)
    5. Msgspra: Batch input Lang, id of message
    6. Msgid: Message id
    7. MsgvN: Message variables (N = 1 - 4)
    For each entry which is updated in the database table, a message is available in BDCMSGCOLL. As BDCMSGCOLL is a structure, you need to declare an internal table which can contain multiple records (unlike a structure).
    Steps for CALL TRANSACTION method
    1. Internal table for the data (structure similar to your local file)
    2. BDCTAB like BDCDATA
    3. UPLOAD or WS_UPLOAD function to upload the data from local file to itab. (Considering file is local file)
    4. Loop at itab.
    Populate BDCTAB table.
    Call transaction <tcode> using <BDCTAB>
    Mode <A/N/E>
    Update <S/A>.
    Refresh BDCTAB.
    Endloop.
    (To populate BDCTAB, You need to transfer each and every field)
    The major differences between the Session method and Call Transaction are as follows:
    1. Session method: data is not updated in the database table unless the session is processed. Call Transaction: immediate update in the database table.
    2. Session method: no sy-subrc is returned. Call Transaction: sy-subrc is returned.
    3. Session method: an error log is created for error records. Call Transaction: errors need to be handled explicitly.
    4. Session method: the database table update is always synchronous. Call Transaction: the database table update can be synchronous or asynchronous.
    Error Handling in CALL TRANSACTION
    When Session Method updates the records in database table, error records are stored in the log file. In Call transaction there is no such log file available and error record is lost unless handled. Usually you need to give report of all the error records i.e., records which are not inserted or updated in the database table. This can be done by the following method:
    Steps for the error handling in CALL TRANSACTION
    1. Internal table for the data (structure similar to your local file)
    2. BDCTAB like BDCDATA
    3. Internal table BDCMSG like BDCMSGCOLL
    4. Internal table similar to Ist internal table
    (Third and fourth steps are for error handling)
    5. UPLOAD or WS_UPLOAD function to upload the data from the local file to itab. (Considering file is local file)
    6. Loop at itab.
    Populate BDCTAB table.
    Call transaction <tr.code> using <Bdctab>
    Mode <A/N/E>
    Update <S/A>
    Messages <BDCMSG>.
    Perform check.
    Refresh BDCTAB.
    Endloop.
    7 Form check.
    IF sy-subrc <> 0. (Call transaction returns the sy-subrc if updating is not successful).
    Call function Format_message.
    (This function is called to store the message given by system and to display it along with record)
    Append itab2.
    Display the record and message.
    DIRECT INPUT
    About Direct Input
    In contrast to batch input, this technique does not create sessions, but stores the data directly. It does not simulate the online transaction. To enter the data into the corresponding database tables directly, the system calls a number of function modules that execute any necessary checks. In case of errors, the direct input technique provides a restart mechanism. However, to be able to activate the restart mechanism, direct input programs must be executed in the background only. Direct input checks the data thoroughly and then updates the database directly.
    You can start a Direct Input program in two ways;
    Start the program directly
    This is the quickest way to see if the program works with your flat file. This option is possible with all direct input programs. If the program ends abnormally, you will not have any logs telling you what has or has not been posted. To minimize the chance of this happening, always use the check file option for the first run with your flat file. This allows you to detect format errors before transfer.
    Starting the program via the DI administration transaction
    This transaction restarts the processing, if the data transfer program aborts. Since DI document are immediately posted into the SAP D/B, the restart option prevents the duplicate document posting that occurs during a program restart (i.e., without adjusting your flat file).
    Direct input is usually done for standard data like material master, FI accounting document, SD sales order and Classification for which SAP has provided standard programs.
    First time you work with the Direct Input administration program, you will need to do some preparation before you can transfer data:
    - Create variant
    - Define job
    - Start job
    - Restart job
    Common batch input errors
    - The batch input BDCDATA structure tries to assign values to fields which do not exist in the current transaction screen.
    - The screen in the BDCDATA structure does not match the right sequence, or an intermediate screen is missing.
    - On exceptional occasions, the logic flow of a batch input session does not exactly match that of manual online processing. This can be discovered by testing the sessions online.
    - The BDCDATA structure contains fields, which are longer than the actual definition.
    - Authorization problems.
    RECORDING A BATCH INPUT
    A BDC recording allows you to record an R/3 transaction and generate a program that contains all screen and field information in the required BDC-DATA format.
    You can either use SHDB transaction for recording or
    SYSTEM -> SERVICES -> BATCH INPUT -> EDIT
    And from here click recording.
    Enter name for the recording.
    (Dates are optional)
    Click recording.
    Enter transaction code.
    Enter.
    Click Save button.
    You finally come to a screen where, you have all the information for each screen including BDC_OKCODE.
    • Click Get Transaction.
    • Return to BI.
    • Click overview.
    • Position the cursor on the just recorded entry and click generate program.
    • Enter program name.
    • Click enter
    The program is generated for the particular transaction.
    BACKGROUND PROCESSING
    Need for Background processing
    When a large volume of data is involved, usually all batch inputs are done in background.
    The R/3 system includes functions that allow users to work non-interactively or offline. The background processing systems handle these functions.
    Non-interactively means that instead of executing the ABAP/4 programs and waiting for an answer, user can submit those programs for execution at a more convenient planned time.
    There are several reasons to submit programs for background execution.
    • The maximum time allowed for online execution should not exceed 300 seconds. User gets TIMEOUT error and an aborted transaction, if time for execution exceeds 300 seconds. To avoid these types of error, you can submit jobs for background processing.
    • You can use the system while your program is executing.
    This does not mean that interactive or online work is not useful. Both type of processing have their own purposes. Online work is the most common one entering business data, displaying information, printing small reports, managing the system and so on. Background jobs are mainly used for the following tasks; to process large amount of data, to execute periodic jobs without human intervention, to run program at a more convenient, planned time other than during normal working hours i.e., Nights or weekends.
    The transaction for background processing is SM36.
    Or
    Tools -> Administration -> Jobs -> Define jobs
    Or
    System -> Services -> Jobs
    Components of the background jobs
    A job in Background processing is a series of steps that can be scheduled and step is a program for background processing.
    • Job name. Defines the name assigned to the job. It identifies the job. You can specify up to 32 characters for the name.
    • Job class. Indicates the type of background processing priority assigned to the job.
    The job class determines the priority of a job. The background system admits three types of job classes: A B & C, which correspond to job priority.
    • Job steps. Parameters to be passed for this screen are as follows:
    Program name.
    Variant if it is report program
    Start criteria for the job: Option available for this are as follows:
    Immediate - allows you to start a job immediately.
    Date/Time - allows you to start a job at a specific date and time.
    After job - you can start a job after a particular job.
    After event - allows you to start a job after a particular event.
    At operation mode - allows you to start a job when the system switches to a particular operation mode.
    Defining Background jobs
    It is two step process: Firstly, you define the job and then release it.
    When users define a job and save it, they are actually scheduling the report i.e., specifying the job components, the steps, the start time.
    When users schedule program for background processing, they are instructing the system to execute an ABAP/4 report or an external program in the background. Scheduled jobs are not executed until they are released. When jobs are released, they are sent for execution to the background processing system at the specified start time. Both scheduling and releasing of jobs require authorizations.
    HANDLING OF POP UP SCREEN IN BDC
    Many times in a transaction a pop-up screen appears, and for this screen you don't pass any record but only an indication telling the system to proceed further.
    To handle such screen, system has provided a variable called BDC_CURSOR. You pass this variable to BDCDATA and process the screen.
    Usually such screen appears in many transactions, in this case you are just passing information, that YES you want to save the information, that means YES should be clicked. So you are transferring this information to BDCDATA i.e., field name of YES which is usually SPOT_OPTION. Instead of BDC_OKCODE, you are passing BDC_CURSOR.
    BDC_CURSOR is also used to place cursor on particular field.
    A simple transaction where you are entering customer number on first screen and on next screen data is displayed for the particular customer number. Field, which we are changing here, are name and city. When you click on save, the changed record gets saved.
    Prerequisite to write this BDC interface as indicated earlier is:
    1. To find screen number
    2. To find screen field names, type of the field and length of the field.
    3. To find BDC_OKCODE for each screen
    4. Create flat file.
    Generally, Batch Input is used to transfer large amounts of data. For example, when you are implementing a new SAP project, you will of course need some data transfer from the legacy system to the SAP system.
    CALL TRANSACTION is used especially for integration actions between two SAP systems or between different modules. Users sometimes wish to do something like click a button or an item and have SAP insert or change data automatically. Here CALL TRANSACTION should be considered.
    To transfer data for multiple transactions, the Batch Input method is usually used.
    check these sites for step by step process:
    For BDC:
    http://myweb.dal.ca/hchinni/sap/bdc_home.htm
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/bdc&
    http://www.sap-img.com/abap/learning-bdc-programming.htm
    http://www.sapdevelopment.co.uk/bdc/bdchome.htm
    http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
    http://help.sap.com/saphelp_47x200/helpdata/en/69/c250684ba111d189750000e8322d00/frameset.htm
    http://www.sapbrain.com/TUTORIALS/TECHNICAL/BDC_tutorial.html
    Check these link:
    http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
    http://www.sap-img.com/abap/question-about-bdc-program.htm
    http://www.itcserver.com/blog/2006/06/30/batch-input-vs-call-transaction/
    http://www.planetsap.com/bdc_main_page.htm
    call Transaction or session method ?

  • Wait for unread message on broadcast channel during import

    Hi All,
    I am trying to import a dump file on an Oracle DB, Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit.
    My import hangs; below is the log.
    Import: Release 10.2.0.3.0 - 64bit Production on Tuesday, 03 April, 2012 21:09:20
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning and Data Mining options
    Master table "CIP_USER_PED1"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "CIP_USER_PED1"."SYS_IMPORT_FULL_01": ****@** remap_schema=**** EXCLUDE=STATISTICS
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/VIEW/VIEW
    It doesn't proceed further.
    I am not seeing any sessions blocked or locked, but I do see session waits.
    SESSION_TYPE     STATUS     SECONDS_IN_WAIT     WAIT_CLASS     EVENT
    MASTER     ACTIVE     468     Idle     wait for unread message on broadcast channel
    WORKER     ACTIVE     467     User I/O     direct path read
    select d.session_type, status, seconds_in_wait, wait_class , event from v$session s ,DBA_DATAPUMP_SESSIONS d where username ='CIP_USER_PED1'
    and program like 'orac%'
    and s.saddr = D.SADDR;
    Below is the only message I see in alert log.
    Tue Apr 3 19:15:52 2012
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=22, OS id=23456
    to execute - SYS.KUPM$MCP.MAIN('SYS_IMPORT_FULL_01', 'CIP_USER_PED1', 'KUPC$C_1_20120403191552', 'KUPC$S_1_20120403191552', 0);
    kupprdp: worker process DW01 started with worker id=1, pid=23, OS id=23458
    to execute - SYS.KUPW$WORKER.MAIN('SYS_IMPORT_FULL_01', 'CIP_USER_PED1');
    I have googled but haven't been able to resolve the issue.
    Can anyone help me with this ?
    Many Thanks
    Kalai

    km1612 wrote: [original post quoted in full above]
    Proper invocation of the procedure below will allow you to monitor progress by observing the contents of the resultant trace file:
    DBMS_MONITOR.SESSION_TRACE_ENABLE(
        session_id   IN  BINARY_INTEGER DEFAULT NULL,
        serial_num   IN  BINARY_INTEGER DEFAULT NULL,
        waits        IN  BOOLEAN DEFAULT TRUE,
        binds        IN  BOOLEAN DEFAULT FALSE,
        plan_stat    IN  VARCHAR2 DEFAULT NULL);
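    For example, a sketch of invoking it for the Data Pump worker session (the SID and SERIAL# below are placeholders; take the real values from V$SESSION joined to DBA_DATAPUMP_SESSIONS as in the query above):
    begin
      dbms_monitor.session_trace_enable(session_id => 123,
                                        serial_num => 45,
                                        waits      => true,
                                        binds      => false);
    end;
    /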

  • HT201263 during the restore process, "waiting for iPad" is shown in iTunes, and after that it gets stuck

    During the restore process, "waiting for iPad" is shown in iTunes, and after that it gets stuck.

    iPad: Unable to update or restore
    http://support.apple.com/kb/ht4097
    iTunes: Specific update-and-restore error messages and advanced troubleshooting
    http://support.apple.com/kb/TS3694
    If you can’t update or restore your iOS device
    http://support.apple.com/kb/ht1808
    iPad Stuck in Recovery Mode after Update
    http://www.transfer-iphone-recovery.com/ipad-stuck-in-recovery-mode-after-update .html
    iOS: Apple logo with progress bar after updating or restoring from backup
    http://support.apple.com/kb/TS3681
     Cheers, Tom

  • Create sql trace files on client machine

    Hi
    oracle creates sql trace files on server side, what are possible and best ways of sharing those files with end users? is it possible to create them on client side instead?

    Dbb wrote:
    Hi
    Hi
    oracle creates sql trace files on server side,
    Yes
    what are possible and best ways of sharing those files with end users?
    Use a shared directory, and point the dump destination parameters to it.
    is it possible to create them on client side instead?
    No
    . :-) any help with my english is welcome :-) .
    Does this mean sharing the user_dump destination at the Linux level and then mounting it from the client machines (Win XP)? Is there any doc on this?
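    As a small aside, a sketch of two things that can make handing trace files to end users easier (the 'Diag Trace' row of V$DIAG_INFO exists only in 11g and later; on 10g the location is given by the user_dump_dest parameter, and the identifier value is arbitrary):
    -- where the server writes trace files (11g+)
    select value from v$diag_info where name = 'Diag Trace';

    -- tag this session's trace file so it is easy to find and share
    alter session set tracefile_identifier = 'my_trace';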
