Commit Clarification

Hi Guys,
I am loading data from AS/400 to Oracle using LKM SQL to Oracle and IKM SQL Control Append.
All works OK, but I have a clarification question regarding commit.
If something goes wrong when the interface has loaded 50% of the records, does it roll back everything, or does it commit the first 50% and then fail?
Cheers

Open the IKM and go to the 'Insert new rows' step; you can find the commit options there. Refer to this:
Question about ODI COMMIT
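As a rough illustration of the two behaviours (a hedged sketch in plain Oracle SQL; the table names are placeholders and the real statements depend on the IKM options):
-- Case 1: one transaction, COMMIT only at the end of "Insert new rows".
-- A failure half-way leaves nothing behind once the session rolls back:
INSERT INTO target_table SELECT * FROM i$_target_table; -- fails at 50%
ROLLBACK; -- zero new rows remain in the target
-- Case 2: intermediate commits (e.g. a commit per batch). A failure
-- half-way leaves the already-committed batches in the target.
So if the IKM commits only once at the end, a mid-load failure leaves no new rows; if it commits in batches, the rows committed before the failure stay.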

Similar Messages

  • Need clarification on using *COMMIT

    Hi,
    I have the following code, which works perfectly fine on its own.
    *XDIM_MEMBERSET CATEGORY=ACTUAL
    *WHEN A_ACCOUNT
       *IS SALARY
                *REC(FACTOR=1,CATEGORY="BUDGET")
    *ENDWHEN
    It'll overwrite the value in BUDGET.SALARY with the modified ACTUAL.SALARY. This overwrite is what I want.
    I need another calculation for A_ACCOUNT.KPI0007, so I added the following code before the code written above. I get a correct value for KPI0007.
    However, the first block then no longer behaves correctly: the new value of ACTUAL.SALARY is added to the existing value of BUDGET.SALARY instead of overwriting it. This is NOT what I want.
    *WHEN A_ACCOUNT
       *IS SALARY
          *REC(EXPRESSION=(GET(A_ACCOUNT="SALARY")+GET(A_ACCOUNT="BONUS")*0.7),A_ACCOUNT="KPI0007")
    *ENDWHEN
    Somehow, I accidentally solved the issue by putting *COMMIT in between, as shown below. I know that *COMMIT is used to put the calculated values into the database. Yet why is it needed in this situation?
    *WHEN A_ACCOUNT
       *IS SALARY
          *REC(EXPRESSION=(GET(A_ACCOUNT="SALARY")+GET(A_ACCOUNT="BONUS")*0.7),A_ACCOUNT="KPI0007")
    *ENDWHEN
    *COMMIT
    *XDIM_MEMBERSET CATEGORY=ACTUAL
    *WHEN A_ACCOUNT
       *IS SALARY
                *REC(FACTOR=1,CATEGORY="BUDGET")
    *ENDWHEN
    Could somebody help explain this?
    Thank you!
    Sunny

    Hi Sorin,
    Thanks for your prompt reply. You have explained your example to me very well. However, I think my case is a bit different.
    In my first WHEN statement, I put the calculated result into KPI0007. Yet I'm not re-using KPI0007 in my 2nd WHEN statement.
    *WHEN A_ACCOUNT
       *IS SALARY
          *REC(EXPRESSION=(GET(A_ACCOUNT="SALARY")+GET(A_ACCOUNT="BONUS")*0.7),A_ACCOUNT="KPI0007")
    *ENDWHEN
    *COMMIT
    *XDIM_MEMBERSET CATEGORY=ACTUAL
    *WHEN A_ACCOUNT
       *IS SALARY
                *REC(FACTOR=1,CATEGORY="BUDGET")
    *ENDWHEN
    It's strange to me that if I don't put *COMMIT in between, the 2nd WHEN statement won't take the revised ACTUAL.SALARY value and overwrite BUDGET.SALARY with it. Instead, it keeps aggregating the revised ACTUAL.SALARY value onto BUDGET.SALARY.

  • Sun Java Comm Suite - Need Clarification of Licensing

    Now that Sun has stripped out the email services from JES 5 and thrown them into another product, must I now pay to USE Sun Java Communication Suite or is the "licensing" strictly for support?
    I must be a blithering idiot because for the life of me I cannot find the issue documented CLEARLY anywhere....."when you are ready to deploy" is a very vague term. I wish the marketing masters would use something like "Ok, it was free, now you have to pay" or "Feel free to use it, but if you want patches, etc. you have to fork over some money"
    I guess the simple question is, will I violate the license agreement if I download and use the Sun Java Communications Suite without paying for a license?
    Thanks,
    Phil

    Hi,
    Now that Sun has stripped out the email services from
    JES 5 and thrown them into another product, must I
    now pay to USE Sun Java Communication Suite or is the
    "licensing" strictly for support?Well I know you can't get support unless you pay for it :)
    I must be a blithering idiot because for the life of
    me I cannot find the issue documented CLEARLY
    anywhere....."when you are ready to deploy" is a very
    vague term.
    There is an underlying assumption here that running with unsupported software in a deployed environment is a bad idea (TM). This is basically saying the software is available for evaluation purposes... which doesn't necessarily preclude it from being available for production purposes... but it doesn't expressly include it either :)
    I wish the marketing masters would use
    something like "Ok, it was free, now you have to pay"
    or "Feel free to use it, but if you want patches,
    etc. you have to fork over some money"
    I guess the simple question is, will I violate the
    license agreement if I download and use the Sun Java
    Communications Suite without paying for a license?
    When you install the software, the software licence agreement outlines the terms of 'permitted use' (section 3).
    From my reading of the licence agreement, if you do not have an entitlement to the license (which I would guess you don't) then the software is for evaluation use.. which is defined as:
    3(a) Evaluation Use. You may evaluate Software internally for a period of 90 days from your first use.
    For any more clarification you may want to ask a Sun sales rep.
    Regards,
    Shane.

  • Insert without commit (implicit and explicit) stores data.

    Hi all:
    I have this piece of code:
    REPORT  ztest.
    DATA: wa_zsic_abonos_chk TYPE zsic_abonos_chk. "Is a transparent table
    START-OF-SELECTION.
       wa_zsic_abonos_chk-bukrs = 'MU01'.
       wa_zsic_abonos_chk-belnr = '99999'.
       wa_zsic_abonos_chk-gjahr = '2008'.
      INSERT into zsic_abonos_chk values wa_zsic_abonos_chk.
      IF sy-subrc <> 0.
        WRITE:/ 'register NOT inserted'.
      ELSE.
        WRITE:/ 'register inserted'.
      ENDIF.
      SELECT SINGLE *
        INTO   wa_zsic_abonos_chk
      FROM zsic_abonos_chk
      WHERE bukrs = 'MU01'
        AND belnr = '99999'
        AND gjahr = '2008'.
      IF sy-subrc = 0.
        WRITE:/ 'register found!.'.
      ELSE.
        WRITE:/ 'register NOT found.'.
      ENDIF.
    When I run this code I get these results:
    register inserted
    register found!
    This is surprising to me, because I did not write the COMMIT WORK after the Insert.
    What I expected to have was:
    register inserted
    register NOT found
    zsic_abonos_chk is a transparent table without buffering, and the register does not exist before the Insert.
    The database system running below SAP is DB2.
    Can someone explain this behavior?
    Thanks in advance
    Jordi

    Today I spent some time to close this thread.
    I made this test:
    REPORT  ZTEST.
    DATA: wa_zsic_abonos_chk TYPE zsic_abonos_chk. "Is a transparent table
    START-OF-SELECTION.
       wa_zsic_abonos_chk-bukrs = 'MU01'.
       wa_zsic_abonos_chk-belnr = '999949'.
       wa_zsic_abonos_chk-gjahr = '2008'.
      INSERT into zsic_abonos_chk values wa_zsic_abonos_chk.
      IF sy-subrc <> 0.
        WRITE:/ 'register NOT inserted'.
      ELSE.
        WRITE:/ 'register inserted'.
      ENDIF.
      SELECT SINGLE *
        INTO   wa_zsic_abonos_chk
      FROM zsic_abonos_chk
      WHERE bukrs = wa_zsic_abonos_chk-bukrs
        AND belnr = wa_zsic_abonos_chk-belnr
        AND gjahr = wa_zsic_abonos_chk-gjahr.
      IF sy-subrc = 0.
        WRITE:/ 'register found!.'.
      ELSE.
        WRITE:/ 'register NOT found.'.
      ENDIF.
      ROLLBACK WORK.
      SELECT SINGLE *
        INTO   wa_zsic_abonos_chk
      FROM zsic_abonos_chk
      WHERE bukrs = wa_zsic_abonos_chk-bukrs
        AND belnr = wa_zsic_abonos_chk-belnr
        AND gjahr = wa_zsic_abonos_chk-gjahr.
      IF sy-subrc = 0.
        WRITE:/ 'register found!.'.
      ELSE.
        WRITE:/ 'register NOT found.'.
      ENDIF.
    The result is:
    register inserted 
    register found!.  
    register NOT found.
    This clarifies the matter: within one database LUW, a session sees its own uncommitted changes, so the SELECT right after the INSERT finds the record; ROLLBACK WORK then discards the insert, which is why the second SELECT finds nothing.

  • Clarification on Data Guard (Physical Standby db)

    Hi guys,
    I have been trying to set up Data Guard with a physical standby database for the past few weeks and I think I have managed to set it up and also perform a switchover. I have been reading a lot of websites and even the Oracle docs for this.
    However I need clarification on the setup and whether or not it is working as expected.
    My environment is Windows 32bit (Windows 2003)
    Oracle 10.2.0.2 (Client/Server)
    2 Physical machines
    Here is what I have done.
    Machine 1
    1. Create a primary database using standard DBCA, hence the Oracle service (oradgp) and password file are also created along with the listener service.
    2. Modify the pfile to include the following:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgp'
    *.fal_server='oradgs'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
    *.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgp
    The locations on the harddisk are all available and archived redo are created (e:\archlogs)
    3. I then added the necessary (4) standby logs on the primary.
    4. To replicate the db on machine 2 (standby db), I did an RMAN backup as:-
    RMAN> run
    {allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
    backup database plus archivelog delete input;}
    5. I then copied the standby~.bak files created from machine1 to machine2 into the same directory (M:\DGBackup), since I maintained the directory structure exactly the same between the 2 machines.
    6. Then created a standby controlfile. (At this time the db was in open/write mode).
    7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
    Machine2
    8. I created an Oracle service called the same as primary (oradgp).
    9. Created a listener also.
    10. Set the Oracle Home & SID to the same name as primary (oradgp) <<<-- I am not sure about the SID one.
    11. I then copied over the pfile from the primary to standby and created an spfile with this one.
    It looks like this:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgs'
    *.fal_server='oradgp'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
    *.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgs
    log_file_name_convert='junk','junk'
    12. Used RMAN to restore the db as:-
    RMAN> startup mount;
    RMAN> restore database;
    Then RMAN created the datafiles.
    13. I then added the same number (4) of standby redo logs to machine2.
    14. Also added a tempfile. Though the temp tablespace was created as part of the RMAN restore, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    15. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    It seems to have started the redo apply as I've checked the alert log and noticed that the sequence# was all "YES" for applied.
    ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    So copied over the REDO logs from the primary machine and placed them in the same directory structure of the standby.
    ########Q1. I understand that the standby database does not need online REDO Logs but why is it reporting in the alert log then??########
    I wanted to enable real time apply, so I cancelled the recovery with:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    and issued:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
    Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
    Also performed a log switch on primary and it got transported to the standby and was applied (YES).
    Also ensured that there are no gaps via some queries where no rows were returned.
    16. I now wanted to perform a switchover, hence issued:-
    Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    All the archivers stopped as expected.
    17. Now on machine2:
    Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    18. On machine1:
    Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
    Primary_Now_Standby_SQL>STARTUP MOUNT;
    Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    19. On machine2:
    Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
    Checked by switching the logfile on the new primary and ensured that the standby received this logfile and was applied (YES).
    However, here are my questions for clarifications:-
    Q1. There is a question about ONLINE REDO LOGS within "#" characters.
    Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    MRP0 APPLYING_LOG 1 47 452 1024000
    but :
    SQL> select max(sequence#) from v$archived_log;
    46
    Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    42 NO
    43 YES
    44 YES
    45 YES
    46 YES
    What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    After reading several documents I am confused at this stage, because I have read that you can set up standby databases using 'standby' logs. Is there another method without using standby logs?
    Q5. The log switch isn't happening automatically on the primary database, where I could see the whole process happening on its own: generation of a new logfile, it being transported to the standby, and then being applied on the standby.
    Could this be due to inactivity on the primary database, as I am not doing anything on it?
    Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
    Thank you very much in advance.
    Regards,
    Bharath

    Parameters:
    Missing on the Primary:
    DB_UNIQUE_NAME=oradgp
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
    Missing on the Standby:
    DB_UNIQUE_NAME=oradgs
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
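    A sketch of how those could be added (DB_UNIQUE_NAME is a static parameter, so it needs SCOPE=SPFILE and a restart; adjust the names to your environment):
    -- On the primary (oradgp):
    ALTER SYSTEM SET db_unique_name='oradgp' SCOPE=SPFILE;
    ALTER SYSTEM SET log_archive_config='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;
    -- On the standby (oradgs):
    ALTER SYSTEM SET db_unique_name='oradgs' SCOPE=SPFILE;
    ALTER SYSTEM SET log_archive_config='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;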
    You said: Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
    You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see that at the start of the MRP because it tries to open them and if it gets the error it will manually create them based on their file definition in the controlfile combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
    Your questions (Q1 answered above):
    You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Up to you. Not a requirement.
    You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
    You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
    You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    42 was probably a gap. Select the FAL columns as well and it will probably say 'YES' for FAL. We do not update the Primary's controlfile every time we resolve a gap. Try the same command on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' tells you that every sequence before that number has to have been applied.
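    For example, on the standby (a sketch; V$ARCHIVED_LOG also has a FAL column flagging archives fetched to resolve a gap):
    -- Redo is applied strictly in order, so if 43-46 show APPLIED='YES'
    -- here, sequence 42 must have been applied as well.
    SQL> SELECT SEQUENCE#, APPLIED, FAL FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;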
    You said: After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
    Yes. If you do not have standby redo log files on the standby then we write directly to an archive log, which means potentially large data loss at failover and no real time apply. That was the old 9i method for ARCH. Don't do that. Always have standby redo logs (SRLs).
    You said: Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you say ALTER SYSTEM SWITCH LOGFILE (or use one of the other methods for switching log files). The heartbeat redo will eventually fill up an online log file, but it is about 13 bytes, so you do the math on how long that would take :^)
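    If you want the primary to switch on a regular schedule even when idle, ARCHIVE_LAG_TARGET takes a number of seconds (a sketch, assuming an SPFILE):
    -- Force a log switch at least every 30 minutes, even on a quiet primary:
    SQL> ALTER SYSTEM SET ARCHIVE_LAG_TARGET=1800 SCOPE=BOTH;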
    You are shipping redo with ASYNC, so we send the redo as it is committed; there is no wait for the log switch. And we are in real time apply, so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo and never switch for the whole day, and the standby would still be caught up with the primary.

  • Raise an Event in Redwood after a Commit to an External Oracle DB?

    Hello Netweaver Community / Redwood specific,
    I am at customer site and have been presented with the following scenario and subsequent question...
    The scenario is that there is a 3rd party product (a PowerBuilder program / Windows executable) that is updating an Oracle database on a server outside of the SAP landscape. This PB program is writing records to the Oracle db that need to be picked up by an XI process. The idea is that after the PB program makes its COMMIT of the records, the XI process needs to be triggered to start so it can pick up the records.
    So the question is...  How can an event be raised in Redwood after a COMMIT of records has been made to this oracle DB that is on a server outside of the SAP landscape?
    I am familiar with (and we are already doing) the raising of an event in Redwood from an ABAP program with a function module that raises external events.
    This is what I know - please do not hesitate to ask for further clarification.
    Thank you,
    Dean Atteberry.

    Carol,
    Sorry for the delay! I am not that frequent to SAP forums. You can write to me on my mail if you have something important. We can always update forum note for everyone.
    I do not know XI that well. But if I understand correctly, you can trigger events in it based on the arrival of a file. You can create a dummy job in Redwood, on Unix or whatever filesystem you use, to create a zero-byte file once the PowerBuilder job is ready.
    Let me know if this helps!
    - Bhushan

  • COMMIT WORK - performance problem

    Dear Fellow SDNers,
    I seek your help on the following problem:
    Scenario : Inbound idoc which updates an Outbound delivery with Picked quantity, posts the goods issue and then creates billing document
    Approach : I am using the function module SD_DELIVERY_UPDATE_PICKING to update the delivery from the idoc data and to post goods issue. Thereafter, I use BAPI_BILLINGDOC_CREATEMULTIPLE to create the billing document. Before calling this BAPI, I use a COMMIT WORK statement to update the relevant tables so as to enable invoice creation properly.
    Problem: The COMMIT WORK statement takes a lot of time to execute (I have no update tasks that could lead to this), so much that the idoc (probably) hits a timeout and ends up in status 64. As a result, the succeeding part of the code (after COMMIT WORK) is not executed and the billing document is not created.
    When I debug this, the COMMIT WORK statement leads to a strange screen (which looks like a blank report output screen, with its title as "UPDATE CONTROL"). However (of course), there is no timeout while debugging and the billing document is successfully created.
    Could anyone provide some pointers to solve this problem?
    regards,
    Priyank

    I have a custom function module Y_IDOC_INPUT_WMSPICK001 which is responsible for idoc inbound processing. SAP PI sends the inbound data to ECC and once this is done, this function module is executed.
    This FM has the following code sequence inside it...
    1) Call the FM SD_DELIVERY_UPDATE_PICKING
    2) COMMIT WORK AND WAIT.
    3) Call the BAPI_BILLINGDOC_CREATEMULTIPLE
    Step 1 is successfully executed, step 2 takes a long time, and after that step 3 is not executed at all and the idoc ends up with a yellow light (status 64).
    Hope that clarifies what I am doing.
    regards,
    Priyank

  • Clarification?: Frank & Lynn's book - task flow "shared" data control scope

    I'm seeking clarification around shared data control scopes please, regarding a point made in Frank Nimphius and Lynn Munsinger's "Oracle Fusion Developer Guide" McGraw-Hill book.
    On page 229 there is a note that states "The data control scope can be shared only if the transaction is also shared". Presumably this implies that only the transaction options "Always Use Existing Transaction" or "Use Existing Transaction if Possible" are applicable for a shared data control scope.
    However this seems at odds with what the IDE supports, as you can also select the transaction options "<No Controller Transaction>" and "Always Begin New Transaction" when the data control scope is set to shared.
    What's correct? The IDE or the book?
    Your assistance appreciated.
    CM.

    Chris,
    "The data control scope can be shared only if the transaction is also shared"
    At least the book stands correct for what I could test in a simple test case:
    1. No transaction - no sharing:
    - no master/detail synchronization; the DCs are not shared
    - a commit in the called BTF does not commit the caller task flow
    2. "Always use existing" transaction selects a shared Data Control and automatically disables this field, so there is no other option for this.
    3. Shared DataControl and "Always begin new transaction":
    committing the transaction in the called BTF also commits the transaction in the calling TF.
    So the bottom line is that transaction handling in ADFc appears confusing, as it is only a directive for the DataControl to interpret.
    Also see page 14 "Task flow "new transaction" vs. "new db connection"" of : http://www.oracle.com/technetwork/developer-tools/adf/learnmore/march2011-otn-harvest-351896.pdf
    In ADF BC it seems that separated transactions only exist if you use isolated mode. If you use shared and new transaction then basically the transactions are not isolated.
    Frank
    Ps.: I took an action item to follow up with development about what the expected ADF BC behavior for the controller settings are.

  • How to use COMMIT and ROLLBACK in BAPIs

    Hi experts,
    Can we use COMMIT or ROLLBACK in a BAPI just as we implement them in ABAP programming? If yes,
    where exactly do we use a normal COMMIT and where do we use BAPI_TRANSACTION_COMMIT when implementing BAPIs?
    Please clarify this. Any reply is really appreciated !!
    Thank you in advance.

    Hi,
    COMMIT is what saves the changes you made to the database; if you have not done a COMMIT WORK, the changes are discarded once your program ends. I think you can see why we do a COMMIT WORK.
    BAPIs are the methods through which we can input data; it is an interface technique, a direct input method.
    For example, if you have inserted some data into a table using a BAPI but have not called the commit BAPI, the changes you made to the database cannot be seen in the table; they only become effective once you have called the commit BAPI (BAPI_TRANSACTION_COMMIT).
    Rollback
    Taking the same example, rollback is nothing but the UNDO option in MS Office: until you save, you can go one step back with undo. Similarly, until you commit (that is, until you save the changes), you can discard the modifications and return to the previous state.
    Note that once you have done a commit you cannot roll back, just as once you have saved a document you cannot undo the document changes.
    Thanks and regards.
    Please reward if this was helpful; you can contact me via the details in my business card if you want any further clarification.

  • How the Auto Commit works in APEX

    Hi,
    I have a PL/SQL process which consists of, suppose, insert / update / delete statements:
    insert into emp ()
    values ()
    update emp
    set
    delete from emp
    where empno =
    Just want to know when APEX does the Auto Commit; I mean, after each DML operation?
    Thanks,
    Deepak

    Thanks..Varad for the response.
    one more clarification.
    Suppose I have a PL/SQL process and I have the following DML in sequence; the DML is for the same table EMP:
    insert into emp (empno) values (1234)
    delete from emp where empno != 1234
    First I am inserting into EMP and, based on that insert, I am doing some delete from the same table EMP.
    So when I come to the delete part, the data '1234' in the EMP table must be committed, so I want to know if I have to issue a COMMIT after the insert statement.
    Thanks,
    Deepak
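    A minimal sketch of the point in question (hypothetical, assuming both statements run in the same PL/SQL process): within one session the DELETE already sees the uncommitted INSERT, because a transaction reads its own changes.
    BEGIN
       INSERT INTO emp (empno) VALUES (1234);
       -- No COMMIT needed here: row 1234 is already visible to this transaction.
       DELETE FROM emp WHERE empno != 1234;
       -- APEX itself typically commits once page processing completes successfully.
    END;
    /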

  • Updating Comms 6 GA to U2, do I need to run comm_dssetup.pl again?

    I'm planning on upgrading our Comms 6 server from GA to U2.
    Do I need to run comm_dssetup.pl if I already ran it under Comms 6.0 GA?
    I read the wiki and get the impression that this is required if you are going from a non-6.0 version of Comms. Further clarification here or in the wiki would be great.

    karl.rossing wrote:
    Can i run commpkg upgrade and upgrade comm_dssetup.pl and Delegated admin before taking an outage to upgrade the rest (mail, calendar, convergence and IM) of the components?
    The first step will be to run ./commpkg upgrade to upgrade dssetup (and only dssetup) and then run the comm_dssetup.pl command. This will result in the Directory Server being restarted after the schema changes have been applied (short outage period assuming everything goes smoothly).
    Once you have upgraded dssetup you can then run ./commpkg upgrade again to upgrade just Delegated Administrator. This will automatically deploy the updated version into the web-container. You will need to restart the web-container after the upgrade process has concluded.
    Are there any side effects from taking Delegated Admin offline while the rest of Comms is running?
    Where have you deployed Delegated Administrator? Is it in the same web-container as Convergence and UWC/CE?
    Regards,
    Shane.

  • Warn: : 2086: 32501/0x6e8e60: sbLogBufGCLockENQNoErr: The commit is perform

    Getting the following warning in our log file:
    Warn: : 2086: 32501/0x6e8e60: sbLogBufGCLockENQNoErr: The commit is performed without the benefit of group commit optimization (error 6003).
    I need clarification on what this relates to, please...

    Like many other databases, TimesTen implements a 'group commit' optimisation when performing durable commits (synchronous commits to disk) in order to improve scalability and throughput. This is implemented via a special lock (the group commit lock) which is used to queue up multiple transactions that are waiting for a commit. As with all locks, it is possible that the attempt to acquire this lock may time out, in which case the transaction will continue to commit but without the potential performance benefit of group commit. It seems this is what is happening in your case. Perhaps you could provide some more information which may shed some light on this:
    1. Exact version of TimesTen (output of ttVersion command).
    2. The ODBC settings for the datastore in question.
    3. If replication is being used or not and if it is details of the replication scheme.
    4. The kind of workload you are running (transactions per second, percentage of reads versus writes).
    5. The specification of the hardware you are running on, especially the disk subsystem.
    Thanks, Chris

  • Currency reset when using commit binding in UIX page

    I have 3 views on this page in a master-detail-detail relationship.
    1) The first should NEVER change currency (it's selected on the previous page).
    2) the other two have range bindings associated with them
    3) the selection on the second view drives the 3rd
    4) the selection on the third (CHANGES AN EDITABLE ROW)
    PROBLEM IS THIS:
    When submitting changes to the 3rd view, I call the "Commit" action binding for the data control; doing so seems to be resetting even the first view. How can I commit the data to the DB without resetting row currency on all views?
    Thanks in advance,
    Sacha

    A quick clarification. By reset I really mean re-executed; currency is changed to the first row for every view...
    Any idea how this can be avoided?

  • Clarification on 'BAPI_BUS2054_CREATE_MULTI'

    Hi Friends,
    I need some clarification on BAPI 'BAPI_BUS2054_CREATE_MULTI':
    1. Can BAPI_PROJECT_MAINTAIN be used to create a WBS element in CJ02 with custom field values? If yes, how can we do it?
    2. I know 'BAPI_BUS2054_CREATE_MULTI' can be used to change the WBS element. My doubt is whether we can create a project and WBS element with this BAPI. If yes, how can we do it?
    3. I want to upload the custom field in transaction CJ02. How can I do that? Please give me an example.
    4. I heard that, to use the BAPI 'BAPI_BUS2054_CREATE_MULTI', SAP Note 637345 should be applied. Is this true?
    Please clarify my doubts.
    Thanks in Advance.

    Anand,
    FU BAPI_BUS2054_CREATE_MULTI
    Short Text
    Create WBS Elements Using BAPI
    Functionality
    WBS elements can be created for a project with BAPI "BAPI_BUS2054_CREATE_MULTI". To do this, parameter "I_PROJECT_DEFINITION" must contain the project definition for which the WBS elements are to be created. The individual WBS elements with all required values must be entered in table "IT_WBS_ELEMENT_TABLE".
    The WBS elements are created next to each other, in the same sequence as in table "IT_WBS_ELEMENT_TABLE". A WBS element under which the new WBS elements are to be created can be specified in parameter "I_WBS_UP". A WBS element that is to be located directly to the left of the new WBS elements can be specified with parameter "I_WBS_LEFT". If "I_WBS_LEFT" is not specified, the new WBS elements are added on the left-hand side. If I_WBS_UP is also not specified, the new WBS elements are added on the left and on the first level.
    Before anything is created, the following is checked:
    Is another project already being processed in the LUW (Logical Unit of Work)?
    Can the project be locked?
    If one check was not successful, nothing is created. Otherwise, each WBS element in "IT_WBS_ELEMENT_TABLE" is created individually, although the following is checked first:
    Is the data consistent?
    If all checks were successful, the individual WBS element is created in the document tables. Afterwards, the hierarchy is updated, that is, the new elements are added in the appropriate place as described above. If an error occurs while this is being carried out, the new elements are created on the right-hand side, on the first level, and an error message is generated in the return table. An error can occur if the WBS element in I_WBS_UP and I_WBS_LEFT does not exist in the specified project, or I_WBS_UP is not directly above I_WBS_LEFT if both are specified, or because an inconsistency occurs in the hierarchy for another reason.
    As soon as a LUW (Logical Unit of Work) is completed with BAPI BAPI_PS_PRECOMMIT and COMMIT WORK, the WBS elements are finally changed.
    Only one project or WBS element from a project can be processed at a time in a LUW.
    The return parameter RETURN displays first an error or success message that shows whether the WBS elements could be created. The first message variable contains the object type, the second contains the object ID, and the fourth contains the GUID (if it could be read). All related messages that were generated during processing are listed underneath the error or success messages. The parameters of the individual messages are filled with the object ID.
    Notes
    1. Definition "Processing Unit"
    In the following, the term "processing unit" refers to a series of related processing steps.
    The first step in a processing unit is initialization, which is done by calling the BAPI BAPI_PS_INITIALIZATION.
    Afterwards, the individual BAPIs listed below can be used several times, if required.
    The processing unit ends when the final precommit (call BAPI BAPI_PS_PRECOMMIT) is executed with a subsequent COMMIT WORK (for example, the statement COMMIT WORK, the BAPI "BAPI_TRANSACTION_COMMIT" or the BapiService.TransactionCommit method).
    After the final COMMIT WORK, the next initialization opens a new processing unit via the BAPI "BAPI_PS_INITIALIZATION".
    In principle, the following applies to each individual processing unit.
    2. Creation of a Processing Unit
    Each processing unit must be initialized by calling the BAPI "BAPI_PS_INITIALIZATION" once.
    Afterwards, the following individual BAPIs can be used within a processing unit - they can also be used more than once, taking into account the "One-Project-Principle" explained below. This also means that an object created in the current processing unit by a CREATE-BAPI can be changed by a CHANGE-BAPI or STATUS-BAPI.
    Except for the BAPIs explicitly named below, you can only call up BAPIs that execute GET methods or READ methods only. In particular, the BAPIs for confirming a network may not be used with the individual BAPIs named below!
    Business Object ProjectDefinitionPI
    BAPI Method
    BAPI_BUS2001_CREATE ProjectDefinitionPI.CreateSingle
    BAPI_BUS2001_CHANGE ProjectDefinitionPI.Change
    BAPI_BUS2001_DELETE ProjectDefinitionPI.Delete
    BAPI_BUS2001_SET_STATUS ProjectDefinitionPI.SetStatus
    BAPI_BUS2001_PARTNER_CREATE_M ProjectDefinitionPI.PartnerCreateMultiple
    BAPI_BUS2001_PARTNER_CHANGE_M ProjectDefinitionPI.PartnerChangeMultiple
    BAPI_BUS2001_PARTNER_REMOVE_M ProjectDefinitionPI.PartnerRemoveMultiple
    Business Object WBSPI
    BAPI Method
    BAPI_BUS2054_CREATE_MULTI WBSPI.CreateMultiple
    BAPI_BUS2054_CHANGE_MULTI WBSPI.ChangeMultiple
    BAPI_BUS2054_DELETE_MULTI WBSPI.DeleteMultiple
    BAPI_BUS2001_SET_STATUS WBSPI.SetStatus
    Business Object NetworkPI
    BAPI Method
    BAPI_BUS2002_CREATE NetworkPI.CreateFromData
    BAPI_BUS2002_CHANGE NetworkPI.Change
    BAPI_BUS2002_DELETE NetworkPI.Delete
    BAPI_BUS2002_ACT_CREATE_MULTI NetworkPI.ActCreateMultiple
    BAPI_BUS2002_ACT_CHANGE_MULTI NetworkPI.ActChangeMultiple
    BAPI_BUS2002_ACT_DELETE_MULTI NetworkPI.ActDeleteMultiple
    BAPI_BUS2002_ACTELEM_CREATE_M NetworkPI.ActElemCreateMultiple
    BAPI_BUS2002_ACTELEM_CHANGE_M NetworkPI.ActElemChangeMultiple
    BAPI_BUS2002_ACTELEM_DELETE_M NetworkPI.ActElemDeleteMultiple
    BAPI_BUS2002_SET_STATUS NetworkPI.SetStatus
    The processing unit must be finished by calling the BAPIs BAPI_PS_PRECOMMIT and BAPI_TRANSACTION_COMMIT (in that order).
    3. One-Project Principle
    For technical reasons, only the project definition and the WBS elements of one project can be processed in a processing unit.
    More than one project is used, for example, if
    You create or change more than one project
    You have changed a project and want to change a network to which WBS elements from a different project are assigned
    You want to change various networks to which WBS elements from different projects are assigned
    You create or change a WBS assignment in a network so that a WBS element from a second project is used
    WBS elements from different projects are already assigned to a network (note: this type of network cannot be processed with the network BAPIs named above).
    If you define a report for calling BAPIs, this means that:
    The report may use a maximum of one project per processing unit. The individual BAPI calls must be distributed between more than one processing unit, which use a maximum of one project per processing unit.
    4. All-Or-Nothing Principle
    If an error occurs in a processing unit in an individual BAPI or in the BAPI "BAPI_PS_PRECOMMIT" (that is, the return table ET_RETURN contains at least one message of type "E" (error), "A" (abnormal end) or "X" (exit)), posting is not possible.
    If an error occurs in an individual BAPI and despite this you call the BAPI "BAPI_PS_PRECOMMIT", message CNIF_PI 056 is issued with message type I (information).
    If an error occurs in an individual BAPI or in the BAPI "BAPI_PS_PRECOMMIT", but despite this you execute a COMMIT WORK, the program that is currently in process is terminated and message CNIF_PI 056 is issued with message type X.
    This is to ensure data consistency for all objects created, changed, and/or deleted in the processing unit.
    Note that the processing unit to which this happens can no longer be successfully closed and therefore, no new processing unit can be started.
    However, you can set the current processing unit back to an initialized status by using a rollback work (for example, statement ROLLBACK WORK, the BAPI "BAPI_TRANSACTION_ROLLBACK" or the method BapiService.TransactionRollback). Technically speaking, this means that the previous LUW is terminated and a new LUW is started in the current processing unit.
    Note that in this case, the current processing unit does not have to be re-initialized.
    Also note that the rollback also takes place according to the "all-or-nothing" principle, that therefore all individual BAPIs carried out up to the rollback are discarded. After a rollback, you can, therefore, no longer refer to an object that was previously created in the current processing unit using a CREATE-BAPI.
    However, you can close the processing unit again after a rollback, using a PRECOMMIT and COMMIT WORK, as long as all individual BAPIs, and the precommit carried out after the rollback, finish without errors.
    You can carry out several rollbacks in a processing unit (technically: start a new LUW several times).
    5. Procedure in the Case of Errors
    As soon as an error occurs in an individual BAPI or in the BAPI "BAPI_PS_PRECOMMIT", you have the following options:
    Exit the report or the program that calls the BAPIs, the PRECOMMIT and the COMMIT WORK.
    Execute a rollback in the current processing unit.
    6. Rules for Posting
    After you have successfully called the individual BAPIs of a processing unit, you must call the PRECOMMIT "BAPI_PS_PRECOMMIT".
    If the PRECOMMIT is also successful, the COMMIT WORK must take place directly afterwards.
    In particular, note that after the PRECOMMIT, you cannot call other individual BAPIs again in the current processing unit.
    It is also not permitted to call the PRECOMMIT more than once in a processing unit.
    7. Recommendation "COMMIT WORK AND WAIT"
    If an object created in a processing unit is to be used in a subsequent processing unit (for example, as an account assignment object in a G/L account posting) it is recommended to call the commit work with the supplement "AND WAIT" or to set the parameters for the BAPI "BAPI_TRANSACTION_COMMIT" accordingly.
    8. Field Selection
    The field selection is a tool for influencing the user interface (that is, for the dialog). In the BAPIs, the settings from the field selection (for example, fields that are not ready for input or required-entry) are not taken into account.
    9. Using a date in the BAPI interface
    The BAPI must be provided with the date in the internal format YYYYMMDD (year month day). No special characters may be used.
    As a BAPI must work independent of user, the date cannot and should not be converted to the date format specified in the user-specific settings.
    10. Customer Enhancements of the BAPIs
    For the BAPIs used to create and change project definitions, WBS elements, networks, activities, and activity elements, you can automatically fill the fields of the tables PROJ, PRPS, AUFK, and AFVU that have been defined for customer enhancements in the standard system.
    For this purpose, help structures that contain the respective key fields, as well as the CI include of the table are supplied. The BAPIs contain the parameter ExtensionIN in which the enhancement fields can be entered and also provide BAdIs in which the entered values can be checked and, if required, processed further.
    CI Include Help Structure   Key
    CI_PROJ BAPI_TE_PROJECT_DEFINITION   PROJECT_DEFINITION
    CI_PRPS BAPI_TE_WBS_ELEMENT   WBS_ELEMENT
    CI_AUFK BAPI_TE_NETWORK   NETWORK
    CI_AFVU BAPI_TE_NETWORK_ACTIVITY   NETWORK ACTIVITY
    CI_AFVU BAPI_TE_NETWORK_ACT_ELEMENT   NETWORK ACTIVITY ELEMENT
    Procedure for Filling Standard Enhancements
    Before you call the BAPI for each object that is to be created or changed, for which you want to enter customer-specific table enhancement fields, add a data record to the container ExtensionIn:
    STRUCTURE:    Name of the corresponding help structure
    VALUEPART1:   Key of the object + start of the data part
    VALUEPART2-4: If required, the continuation of the data part
    VALUEPART1 to VALUEPART4 are therefore filled consecutively, first with the keys that identify the table rows and then with the values of the customer-specific fields. By structuring the container in this way, it is possible to transfer its content with one MOVE command to the structure of the BAPI table extension.
    Note that when objects are changed, all fields of the enhancements are overwritten (as opposed to the standard fields, where only those fields for which the respective update indicator is set are changed). Therefore, even if you only want to change one field, all the fields that you transfer in ExtensionIn must be filled.
    Checks and Further Processing
    Using the methods ...CREATE_EXIT1 or ...CHANGE_EXIT1 of the BAdIs BAPIEXT_BUS2001, BAPIEXT_BUS2002, and BAPIEXT_BUS2054, you can check the entered values (and/or carry out other checks).
    In the BAdI's second method, you can program that the data transferred to the BAPI is processed further (if you only want to transfer the fields of the CI includes, no more action is required here).
    For more information, refer to the SAP Library under Cross-Application Components -> Business Framework Architecture -> Enhancements, Modifications ... -> Customer Enhancement and Modification of BAPIs -> Customer Enhancement of BAPIs (CA-BFA).
    Further information
    For more information, refer to the SAP Library under Project System -> Structures -> Project System Interfaces -> PS-EPS Interface to External Project Management Systems.
    Parameters
    I_PROJECT_DEFINITION
    IT_WBS_ELEMENT
    ET_RETURN
    EXTENSIONIN
    EXTENSIONOUT
    Exceptions
    Function Group
    CJ2054

  • REGEXP_SUBSTR for comma delimited list with null values

    Hi,
    I have a column which stores a comma delimited list of values. Some of these values in the list may be null. I'm having some issues trying to extract the values using the REGEXP_SUBSTR function when null values are present. Here are two things that I've tried:
    SELECT
       REGEXP_SUBSTR (val, '[^,]*', 1, 1) pos1
      ,REGEXP_SUBSTR (val, '[^,]*', 1, 2) pos2
      ,REGEXP_SUBSTR (val, '[^,]*', 1, 3) pos3
      ,REGEXP_SUBSTR (val, '[^,]*', 1, 4) pos4
      ,REGEXP_SUBSTR (val, '[^,]*', 1, 5) pos5
    FROM (SELECT 'AAA,BBB,,DDD,,FFF' val FROM dual);
    POS P POS P P
    AAA   BBB
    SELECT
       REGEXP_SUBSTR (val, '[^,]+', 1, 1) pos1
      ,REGEXP_SUBSTR (val, '[^,]+', 1, 2) pos2
      ,REGEXP_SUBSTR (val, '[^,]+', 1, 3) pos3
      ,REGEXP_SUBSTR (val, '[^,]+', 1, 4) pos4
      ,REGEXP_SUBSTR (val, '[^,]+', 1, 5) pos5
    FROM (SELECT 'AAA,BBB,,DDD,,FFF' val FROM dual);
    POS POS POS POS P
    AAA BBB DDD FFF
    As you can see, neither of the calls works correctly. Does anyone know how to modify the regular expression pattern to handle null values? I've tried various other patterns but was unable to get any of them to work for all cases.
    Thanks,
    Martin
    http://www.ClariFit.com
    http://www.TalkApex.com

    Hi, Martin,
    This does what you want:
    SELECT
       RTRIM (REGEXP_SUBSTR (val, '[^,]*,', 1, 1), ',') pos1
      ,RTRIM (REGEXP_SUBSTR (val, '[^,]*,', 1, 2), ',') pos2
      ,RTRIM (REGEXP_SUBSTR (val, '[^,]*,', 1, 3), ',') pos3
      ,RTRIM (REGEXP_SUBSTR (val, '[^,]*,', 1, 4), ',') pos4
      ,RTRIM (REGEXP_SUBSTR (val || ','
                          , '[^,]*,', 1, 5), ',') pos5
    FROM (SELECT 'AAA,BBB,,DDD,,FFF' val FROM dual);
    The query above works in Oracle 10 or 11, but in Oracle 11 you can also do it with just REGEXP_SUBSTR, without using RTRIM:
    SELECT
       REGEXP_SUBSTR (val, '([^,]*),|$', 1, 1, NULL, 1) pos1
      ,REGEXP_SUBSTR (val, '([^,]*),|$', 1, 2, NULL, 1) pos2
      ,REGEXP_SUBSTR (val, '([^,]*),|$', 1, 3, NULL, 1) pos3
      ,REGEXP_SUBSTR (val, '([^,]*),|$', 1, 4, NULL, 1) pos4
      ,REGEXP_SUBSTR (val, '([^,]*),|$', 1, 5, NULL, 1) pos5
    FROM (SELECT 'AAA,BBB,,DDD,,FFF' val FROM dual);
    The problem with your first query was that it was looking for sub-strings of 0 or more non-commas. There was such a sub-string, consisting of 3 characters, starting at position 1, so it returned 'AAA', as expected. Then there was another sub-string, of 0 characters, starting at position 4, so it returned NULL. Then there was a sub-string of 3 characters starting at position 5, so it returned 'BBB'.
    The problem with your 2nd query was that it was looking for 1 or more non-commas. 'DDD' is the 3rd such sub-string.
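    A quick way to see those empty matches is to enumerate the first few occurrences of the original pattern (a sketch):
    SELECT LEVEL AS occurrence
         , REGEXP_SUBSTR ('AAA,BBB,,DDD,,FFF', '[^,]*', 1, LEVEL) AS token
    FROM   dual
    CONNECT BY LEVEL <= 6;
    -- occurrence 1 -> 'AAA'; occurrence 2 -> NULL (the empty match at position 4);
    -- occurrence 3 -> 'BBB'; and so on.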
