ORA-12838

I always encounter this error when I run my packages.
Error message: ORA-12838: cannot read/modify an object after modifying it in parallel.
Thanks.

Hi,
You will find the relevant information at http://orafaq.com/maillist/oracle-l/2003/05/01/0052.htm.
Thanks
Radhakrushna

Similar Messages

  • OWB Issue || ORA-12838: cannot read/modify an object after modifying it in parallel

    Hi
    I am not able to load data into the staging table.
    It is not throwing any error, only warnings.
    As a next step I took the intermediate result generation and tried to run it,
    but it gives me this error:
    ORA-12838: cannot read/modify an object after modifying it in parallel
    Can you please help me in this case?
    Thanks

    Hello,
    The error ORA-12838 occurs if you issue a SELECT or DML statement against an object immediately after modifying it with parallel (or direct-path) DML within the same transaction - that is not permitted.
    Start another transaction - issue COMMIT or ROLLBACK first.
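    For illustration, a minimal sketch of the behaviour (the table names here are made up for the example):
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(stg, 4) */ INTO stg SELECT * FROM src;   -- parallel / direct-path load
    SELECT COUNT(*) FROM stg;   -- raises ORA-12838 in the same transaction
    COMMIT;                     -- end the transaction first
    SELECT COUNT(*) FROM stg;   -- now succeeds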

  • DRG-50857 and ORA-12838 error while INDEXTYPE IS CTXSYS.CONTEXT

    Hi Experts,
    I've been encountering an issue while creating INDEX.
    If I execute the script below from SQL Developer line by line, the index gets created. The weird thing is that if I execute the same statements as one SQL file from SQL*Plus or SQL Developer, I get the error shown below. As we need to deliver one SQL file to the customer, how can we fix this issue?
    Below is the content of my sql file
    ALTER SESSION ENABLE PARALLEL DML;
    ALTER SESSION FORCE PARALLEL QUERY;
    CREATE INDEX APPS.TEST_N1 ON APPS.EMP_TABLE
      (EMP_NAME)
      INDEXTYPE IS CTXSYS.CONTEXT PARALLEL 32;
    ALTER INDEX APPS.TEST_N1 NOPARALLEL;
    COMMIT;
    Below is the error
    Session altered.
    Elapsed: 00:00:00.00
    Session altered.
    Elapsed: 00:00:00.00
    CREATE INDEX APPS.TEST_N1 ON APPS.EMP_TABLE
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in dripref.set_preferences
    ORA-12838: cannot read/modify an object after modifying it in parallel
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 366
    Elapsed: 00:00:02.82
    Index altered.
    Elapsed: 00:00:00.01
    Commit complete.

    I get the same error in 11g, as shown below, but not in 12c, as shown below that. So it looks like there has been a change between versions in the way that Oracle Text does things. When you create an Oracle Text CONTEXT index, it creates some domain index tables. Apparently something has changed behind the scenes in the dripref.set_preferences procedure, which perhaps does an append or some such thing. If you post this question in the Text space/sub-forum, with a link to this thread, you may get a more detailed explanation and perhaps a workaround from Oracle Text product manager Roger Ford or others. Or perhaps a moderator will move this thread there.
    SCOTT@orcl> SELECT banner FROM v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0    Production
    TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    5 rows selected.
    SCOTT@orcl> CREATE TABLE emp_table AS SELECT ename emp_name FROM emp;
    Table created.
    SCOTT@orcl> ALTER SESSION ENABLE PARALLEL DML;
    Session altered.
    SCOTT@orcl> ALTER SESSION FORCE PARALLEL QUERY;
    Session altered.
    SCOTT@orcl> CREATE INDEX SCOTT.TEST_N1 ON SCOTT.EMP_TABLE
      2    (
      3       EMP_NAME
      4    )
      5    INDEXTYPE IS CTXSYS.CONTEXT PARALLEL 32;
    CREATE INDEX SCOTT.TEST_N1 ON SCOTT.EMP_TABLE
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in dripref.set_preferences
    ORA-12838: cannot read/modify an object after modifying it in parallel
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 366
    SCOTT@orcl> ALTER INDEX SCOTT.TEST_N1 NOPARALLEL;
    Index altered.
    SCOTT@orcl> COMMIT;
    Commit complete.
    SCOTT@orcl12c> SELECT banner FROM v$version;
    BANNER
    Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
    PL/SQL Release 12.1.0.1.0 - Production
    CORE    12.1.0.1.0    Production
    TNS for 64-bit Windows: Version 12.1.0.1.0 - Production
    NLSRTL Version 12.1.0.1.0 - Production
    5 rows selected.
    SCOTT@orcl12c> CREATE TABLE emp_table AS SELECT ename emp_name FROM emp;
    Table created.
    SCOTT@orcl12c> ALTER SESSION ENABLE PARALLEL DML;
    Session altered.
    SCOTT@orcl12c> ALTER SESSION FORCE PARALLEL QUERY;
    Session altered.
    SCOTT@orcl12c> CREATE INDEX SCOTT.TEST_N1 ON SCOTT.EMP_TABLE
      2    (
      3       EMP_NAME
      4    )
      5    INDEXTYPE IS CTXSYS.CONTEXT PARALLEL 32;
    Index created.
    SCOTT@orcl12c> ALTER INDEX SCOTT.TEST_N1 NOPARALLEL;
    Index altered.
    SCOTT@orcl12c> COMMIT;
    Commit complete.
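    A possible workaround sketch for 11g (this is an assumption, not confirmed by Oracle: it supposes that the session-level FORCE PARALLEL QUERY / ENABLE PARALLEL DML settings are what make the internal dripref.set_preferences call fail). Let the PARALLEL clause on the index drive the parallel build instead of forcing parallelism for the whole session:
    -- No ALTER SESSION ENABLE PARALLEL DML / FORCE PARALLEL QUERY beforehand.
    CREATE INDEX APPS.TEST_N1 ON APPS.EMP_TABLE (EMP_NAME)
      INDEXTYPE IS CTXSYS.CONTEXT PARALLEL 32;
    ALTER INDEX APPS.TEST_N1 NOPARALLEL;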

  • ORA-12838: cannot read/modify an object after modifying it in parallel

    I am getting the above error when I hit this part in my load.
    Can somebody suggest why it occurs in this case?
    MERGE INTO PART1 H
    USING (SELECT PA1, PART1, CON1, GEO1, COUN1, KIT1
           FROM W1) WH
    ON (H.PART1 = WH.PART1)   --on PK darshan 01-03-2006
    WHEN MATCHED THEN
      UPDATE SET H.CON1     = WH.CON1,
                 H.GEO1     = FN_GETGEOCODE(WH.GEO1),
                 H.COUN1    = WH.COUN1,
                 H.KIT1     = WH.KIT1,
                 H.DT1      = TO_NUMBER(TO_CHAR(FN_GETGMTDATE(SYSDATE),'YYYYMMDD')),
                 H.DT_LAST1 = FN_GETGMTDATE(SYSDATE)
    WHEN NOT MATCHED THEN
      INSERT (H.PART1, H.PART2, H.CON1, H.GEO1, H.COUN1, H.KITD1, H.DT1, H.DT_1, H.DT_LAST_1)
      VALUES (CASE FN_MERGE_COUNTER(gpi_inserting) WHEN 0 THEN WH.PART1 END,
              WH.PARTN1,
              WH.CON1,
              FN_GETGEOCODE(WH.GEO1),
              WH.COUNT1,
              WH.KIT1,
              TO_NUMBER(TO_CHAR(FN_GETGMTDATE(SYSDATE),'YYYYMMDD')),
              PKG_COMMONACTIVITIES.FN_GETGMTDATE(SYSDATE),
              PKG_COMMONACTIVITIES.FN_GETGMTDATE(SYSDATE));

    does this give you a clue?
    SQL> CREATE TABLE t AS
      2  SELECT owner, object_name FROM all_objects
      3  WHERE 1=0;
    Table created.
    SQL> INSERT /*+ APPEND */ INTO t
      2  SELECT owner, object_name
      3  FROM all_objects;
    8982 rows created.
    SQL> SELECT * FROM t;
    SELECT * FROM t
    ERROR at line 1:
    ORA-12838: cannot read/modify an object after modifying it in parallel
    I bet that you need a commit after loading W1.
    John
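    For illustration, a minimal self-contained sketch of that suggestion (the table names here are made up; they stand in for W1 and the MERGE target):
    CREATE TABLE w1_demo  (part1 NUMBER, con1 NUMBER);
    CREATE TABLE tgt_demo (part1 NUMBER, con1 NUMBER);
    INSERT /*+ APPEND */ INTO w1_demo SELECT ROWNUM, ROWNUM FROM dual CONNECT BY LEVEL <= 10;
    COMMIT;   -- without this commit, the MERGE below raises ORA-12838 when it reads w1_demo
    MERGE INTO tgt_demo h
    USING (SELECT part1, con1 FROM w1_demo) wh
    ON (h.part1 = wh.part1)
    WHEN MATCHED THEN UPDATE SET h.con1 = wh.con1
    WHEN NOT MATCHED THEN INSERT (h.part1, h.con1) VALUES (wh.part1, wh.con1);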

  • ORA-12839: cannot modify an object in parallel after modifying it

    Hi,
    I am facing a problem when I am trying to update a table:
    ORA-12839: cannot modify an object in parallel after modifying it
    How can I rectify it?
    Any help would be highly appreciated.
    Thanks and Regards

    Or, if it needs to be the same transaction, leave out any parallelization such as the APPEND hint:
    SQL> create table emp2 as select * from emp where 1=2
    Table created.
    SQL> insert /*+ append */ into emp2 select * from emp
    14 rows created.
    SQL> update emp2 set sal=200 where empno=7788
    Error at line 11
    ORA-12838: cannot read/modify an object after modifying it in parallel
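    The other alternative (a minimal sketch, reusing the same EMP2 example) is to commit the direct-path insert before updating:
    insert /*+ append */ into emp2 select * from emp;
    commit;                                         -- ends the transaction that did the direct-path load
    update emp2 set sal = 200 where empno = 7788;   -- now succeeds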

  • How to delete a project in Oracle apps R12

    Hi
    OS RHEL 5.5
    Database 11.1.0.7
    Applications R12
    Initially I created a project in Oracle Apps from the wrong template, then modified the template and created a project with the same modified template.
    Now the wrong project needs to be deleted. How do I delete the project that was created from the wrong template in Oracle Apps R12 and integrated with MSP?
    Both the wrong and the correct projects refer to the same template.
    Please help me delete the project, as this has to be done in the production database.
    Rgrd
    Edited by: 955685 on Feb 4, 2013 8:10 AM

    955685 wrote: (question quoted above)
    Please see these docs/links.
    Delete Project Functionality not Working [ID 1060648.1]
    PRC: Delete Project Performance Reporting Data Completes With Error ORA-12838 ORA-6512 [ID 748674.1]
    How Can the PA Period Type Be Changed if It Has Been Entered Incorrectly? [ID 881424.1]
    PAXPREPR Unable To Delete Project [ID 874642.1]
    http://www.oracle.com/pls/ebs121/search?word=Delete+AND+Project&format=ranked&remark=quick_search
    Thanks,
    Hussein

  • Use of nologging clause

    Hi, I was trying to use the NOLOGGING clause to improve the performance of DML on one of my tables. However, I observed that the table with the NOLOGGING option actually performs worse :(
    please refer to following log.
    SQL> create table test_log(id int, name char(40))
    2 /
    Table created.
    Elapsed: 00:00:00.03
    SQL> create table test_nolog(id int, name char(40)) nologging
    2 /
    Table created.
    Elapsed: 00:00:00.00
    SQL> insert into test_log select ROWNUM*-1,DBMS_RANDOM.STRING('A',1) FROM DUAL CONNECT BY LEVEL <=1000000
    2 /
    1000000 rows created.
    Elapsed: 00:00:13.46
    SQL> insert into test_nolog select ROWNUM*-1,DBMS_RANDOM.STRING('A',1) FROM DUAL CONNECT BY LEVEL <=1000000
    2 /
    1000000 rows created.
    Elapsed: 00:00:16.95
    SQL> update test_log set id=100
    2 /
    1000000 rows updated.
    Elapsed: 00:00:46.35
    SQL> update test_nolog set id=100
    2 /
    1000000 rows updated.
    Elapsed: 00:00:49.43

    INSERT and UPDATE are not affected by whether the table is created with the NOLOGGING or LOGGING clause.
    They generate the same amount of redo for INSERT statements as well as for UPDATE statements.
    NOLOGGING can help only for the following things:
    1. CTAS
    2. SQL*Loader in direct mode
    3. INSERT /*+ APPEND */ ...
    SYSTEM@rman 15/12/2008> truncate table  test_log;
    Table truncated.
    Elapsed: 00:00:01.49
    SYSTEM@rman 15/12/2008> truncate table test_nolog;
    Table truncated.
    Elapsed: 00:00:00.67
    SYSTEM@rman 15/12/2008> insert into test_nolog select ROWNUM*-1,DBMS_RANDOM.STRING('A',1) FROM DUAL CONNECT BY LEVEL <=1000000;
    1000000 rows created.
    Elapsed: 00:00:39.80
    Execution Plan
    Plan hash value: 1731520519
    | Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
    |   1 |  COUNT                        |      |       |            |          |
    |*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
    |   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter(LEVEL<=1000000)
    Statistics
           3081  recursive calls
          41111  db block gets
           8182  consistent gets
              0  physical reads
       60983504  redo size
            674  bytes sent via SQL*Net to client
            638  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
        1000000  rows processed
    SYSTEM@rman 15/12/2008> commit;
    Commit complete.
    Elapsed: 00:00:00.03
    SYSTEM@rman 15/12/2008> insert into test_log select ROWNUM*-1,DBMS_RANDOM.STRING('A',1) FROM DUAL CONNECT BY LEVEL <=1000000;
    1000000 rows created.
    Elapsed: 00:00:38.79
    Execution Plan
    Plan hash value: 1731520519
    | Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
    |   1 |  COUNT                        |      |       |            |          |
    |*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
    |   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter(LEVEL<=1000000)
    Statistics
           3213  recursive calls
          41323  db block gets
           8261  consistent gets
              2  physical reads
       60993120  redo size
            674  bytes sent via SQL*Net to client
            636  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
        1000000  rows processed
    SYSTEM@rman 15/12/2008> commit;
    They simply generate the same amount of redo.
    If you use the APPEND hint you can reduce the INSERT statement timings:
    SYSTEM@rman 15/12/2008> truncate table test_nolog;
    Table truncated.
    Elapsed: 00:00:00.28
    SYSTEM@rman 15/12/2008> INSERT /*+ APPEND */ into test_nolog select ROWNUM*-1,DBMS_RANDOM.STRING('A',1) FROM DUAL CONNECT BY LEVEL <=1000000;
    1000000 rows created.
    Elapsed: 00:00:28.19
    Execution Plan
    ERROR:
    ORA-12838: cannot read/modify an object after modifying it in parallel
    SP2-0612: Error generating AUTOTRACE EXPLAIN report
    Statistics
           3125  recursive calls
           8198  db block gets
            929  consistent gets
              0  physical reads
         161400  redo size
            660  bytes sent via SQL*Net to client
            652  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
        1000000  rows processed
    SYSTEM@rman 15/12/2008>
    You can also notice a significant difference in elapsed time and redo generated between a plain INSERT and an INSERT with the APPEND hint on a NOLOGGING table.
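    To illustrate the first case in the list above (CTAS), here is a minimal hedged sketch - the table name is made up, and minimal redo only applies when the database/tablespace is not in FORCE LOGGING mode:
    -- CTAS into a NOLOGGING table is a direct-path load and generates minimal redo:
    CREATE TABLE test_ctas_nolog NOLOGGING AS
    SELECT ROWNUM * -1 AS id, DBMS_RANDOM.STRING('A', 1) AS name
    FROM dual CONNECT BY LEVEL <= 1000000;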

  • Insert /*+ Append */ and direct-path INSERT

    Hi Guys
    Does the INSERT /*+ APPEND */ hint cause Oracle 10g to use a direct-path INSERT?
    And if the INSERT /*+ APPEND */ hint does cause Oracle to use a direct-path INSERT, is it subject to the same restrictions as direct path, such as "The target table cannot have any triggers or referential integrity constraints defined on it"?
    Thanks

    Dear,
    Here below is a simple example showing the effect of an existing trigger on the APPEND hint.
    mhouri@mhouri> select * from v$version where rownum=1;
    BANNER                                                                         
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production         
    mhouri@mhouri> create table b as select * from all_objects where 1 = 2;
    Table created.
    mhouri@mhouri> insert /*+ append */ into b
      2  select * from all_objects;
    70986 row(s) created.
    mhouri@mhouri> select * from b;
    select * from b
    ERROR at line 1:
    ORA-12838: cannot read/modify an object after modifying it in parallel
    mhouri@mhouri> rollback;
    Rollback complete.
    The direct-path load took place, as far as I can tell, because I cannot select from the table before I commit.
    mhouri@mhouri> create trigger b_trg before insert on b
      2  for each row
      3  begin
      4  null;
      5  end;
      6  /
    Trigger created.
    mhouri@mhouri> insert /*+ append */ into b
      2  select * from all_objects;
    70987 row(s) created.
    424 row(s) selected.
    mhouri@mhouri> select count(1) from b;
      COUNT(1)                                                                     
         70987
    While in the presence of this trigger on the table, the APPEND hint has been silently ignored by Oracle. The fact that I can select from the table immediately after the insert has finished indicates that the table was not loaded using direct path.
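    A similar hedged sketch for the referential-integrity case (all names are made up): with an enabled foreign key on the target table, the APPEND hint is also silently ignored, so the table can be read before committing:
    CREATE TABLE dept_demo (deptno NUMBER PRIMARY KEY);
    CREATE TABLE emp_demo  (empno NUMBER, deptno NUMBER REFERENCES dept_demo);
    INSERT INTO dept_demo VALUES (10);
    COMMIT;
    INSERT /*+ append */ INTO emp_demo SELECT ROWNUM, 10 FROM dual CONNECT BY LEVEL <= 5;
    SELECT COUNT(*) FROM emp_demo;   -- no ORA-12838: the insert fell back to conventional mode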
    Best Regards
    Mohamed Houri

  • Tables stored in memory but may be accessed from other sessions

    Dear Oracle experts,
    it may be a stupid question, but I'm searching for the best way to create a kind of temporary table which may be accessed from other sessions.
    Could you provide some hints/keywords to speed up my research?
    Thank you very much,
    Daniel

    danielwetzler wrote:
    I fear that the caching is not suitable for my case because of the reasons described in my other postings...
    First of all, I don't think that you need to worry about the effectiveness of the caching in your particular case. In addition, Oracle is very clever about when to actually write the dirty blocks from the buffer cache to disk, so if the amount of data written to the result table is fairly small and no other activity is going on in your system, it won't get written to disk immediately anyway but will stay in memory until any of the conditions are met that trigger the database writer to flush the blocks to disk.
    But there are options you could consider if you want to avoid as much of the overhead as possible and to write the results of your calculation to the result table as fast as possible.
    You could use direct-path inserts (INSERT /*+ APPEND */) and set the result table to "NOLOGGING". This way no undo and minimum redo is generated.
    Note however that there are certain caveats and restrictions to consider when using such an approach, e.g. your result table won't be recoverable (which you say is OK), only one direct-path insert is allowed simultaneously (it blocks the table exclusively, no other DML possible until you commit/rollback the transaction), and the direct-path insert has some restrictions. If any of these apply that prevent the direct-path operation then it silently falls back to "conventional" insert mode which generates undo and redo. One of the more annoying restrictions is that you can't read from a table that has been written to in direct-path mode within the same transaction, you first have to commit the transaction, otherwise you get "ORA-12838: cannot read/modify an object after modifying it in parallel". This means by simply adding the APPEND hint you might break existing logic.
    Finally the direct-path insert never re-uses free space in blocks below the current high-water mark, which means if you perform a direct-path insert and afterwards delete rows from the table and repeat again a direct-path insert operation your segment will grow and the empty space in the already used blocks won't get re-used. Best way would be to truncate the table rather than deleting rows from it.
    There are workarounds available to overcome some of these direct-path insert limitations (exclusive lock, truncate instead of delete etc.), like using a partitioned table (if you have a suitable edition/license), because direct-path inserts can be restricted to partitions. In this case you can do simultaneous direct-path inserts if you use different partitions, but you need then some kind of logic that determines which partition to use.
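    As a rough illustration of that last point, a minimal sketch (all names here are hypothetical) of two sessions direct-path loading different partitions of the same table at the same time:
    CREATE TABLE result_part (region_id NUMBER, amount NUMBER)
    NOLOGGING
    PARTITION BY LIST (region_id)
      (PARTITION p_east VALUES (1),
       PARTITION p_west VALUES (2));
    -- Session 1:
    INSERT /*+ APPEND */ INTO result_part PARTITION (p_east)
    SELECT 1, LEVEL FROM dual CONNECT BY LEVEL <= 1000;
    COMMIT;
    -- Session 2 (concurrently):
    INSERT /*+ APPEND */ INTO result_part PARTITION (p_west)
    SELECT 2, LEVEL FROM dual CONNECT BY LEVEL <= 1000;
    COMMIT;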
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Commit Rollback for Parent & Child table

    Hi,
    I need to load data into a parent table and a child table (record by record), i.e. one record will be loaded to the parent table and the related child record will be loaded to the child table.
    After the first record is loaded to the child table, the next record will be loaded to the parent table.
    My requirement is that I should not commit the parent table before the child record is transferred (so that I can roll back if the record fails). So I have set the parent table IKM commit option to NO. Because of this, my child record is not loaded to the corresponding target table; it fails because the parent record is not committed.
    Do we have any possibilities to overcome this issue?
    Thanks in Advance,
    Ram Mohan T

    Cezar,
    I couldn't set the CKM options to "No Commit". When I did that, I got the following error at step "16 - Control - CUSTOMER_DET - insert PK errors":
    12838 : 72000 : java.sql.SQLException: ORA-12838: cannot read/modify an object after modifying it in parallel.
    This error occurs in the CKM, so I have set all the CKM options to "Commit" and the IKM and LKM to "No Commit". It seems to be fine.
    Cezar,
    I have a plan for the scenario that I mentioned in the previous update (one dept id and all related employees). Please verify this:
    1) Create a procedure which extracts dept_id from the source tab and passes it to the scenario (in the target tab) of a package.
    2) This package has the dept_id variable, interface1 which loads data to dept, and following that another procedure (2).
    3) This procedure2 will extract the emp_id that corresponds to the value of the dept_id variable and pass this emp_id to the scenario in the target tab.
    4) This scenario is of a package which has the emp_id variable and interface2 for loading employees.
    While executing this plan, the problems are:
    1) Interface1 which loads dept_id is not committed (due to the KMs with commit set to "No Commit"), so the employee records are getting loaded into the error table.
    2) I have set the interface1 KM commit option to "Default: Yes" (but the Knowledge Module steps are still No Commit), and still the child records are getting loaded into the error table.
    3) As per the above scenario, all these transformations are not taking place in the same transaction. That is the problem, I believe.
    Do we have any possibilities to overcome this, Cezar?
    Thanks in Advance,
    Ram Mohan T
    Edited by: T. Ram Mohan on Mar 5, 2009 11:43 AM

  • Insert using parallel

    What does this last statement mean? It is as though the query runs just as it would without any hints.
    oracle doc:
    Using Parallel Execution
    Examples of Distributed Transaction Parallelization
    This section contains several examples of distributed transaction processing.
    Example 1 Distributed Transaction Parallelization
    In this example, the DML statement queries a remote object:
    INSERT /*+ APPEND PARALLEL (t3,2) */ INTO t3 SELECT * FROM t4@dblink;
    The query operation is executed serially without notification because it references a remote object.

    Randolf,
    Since I have a real db link, why not test it myself (see my questions at the end of this thread)?
    SQL> insert /*+ append parallel(lcl) */ into local_tab lcl select /*+ parralel(dst) */ dst.* from distant_tab dst;
    51 rows created.
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  1nhmuzb4ayq7x, child number 0
    insert /*+ append parallel(lcl) */ into local_tab lcl select /*+
    parralel(dst) */ dst.* from distant_tab dst
    Plan hash value: 2098243032
    | Id  | Operation        | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
    |   0 | INSERT STATEMENT |               |       |       |    57 (100)|          |        |      |
    |   1 |  LOAD AS SELECT  |               |       |       |            |          |        |      |
    |   2 |   REMOTE         | distant_tab   |    51 |  3009 |    57   (0)| 00:00:01 | XLCL~  | R->S |
    Remote SQL Information (identified by operation id):
       2 - SELECT /*+ OPAQUE_TRANSFORM */ "DST_ID","NAME_FR","NAME_NL","DST_UIC_CODE","DST_VOI
           C_CODE","BUR_UIC_CODE","BUR_VOIC_CODE","INFO_NEEDED","TRANSFERRED","VALID_FROM_DATE",
        "VALID_TO_DATE","SORTING","SO_NEEDED" FROM "distant_tab" "DST" (accessing'XLCL_XDST.WORLD' )
    Let's enable parallel DML and repeat the same insert:
    SQL> alter session enable parallel dml;
    Session altered.
    SQL> insert /*+ append parallel(lcl) */ into local_tab lcl select /*+ parralel(dst) */ dst.* from distant_tab dst;
    51 rows created.
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  1nhmuzb4ayq7x, child number 1
    insert /*+ append parallel(lcl) */ into local_tab lcl select /*+ parralel(dst) */ dst.* from distant_tab dst
    Plan hash value: 2511483212
    | Id  | Operation               | Name          | Rows  | Bytes | Cost (%CPU)| Time     | TQ/Ins |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT        |               |       |       |    57 (100)|          |        |      |            |
    |   1 |  PX COORDINATOR         |               |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)   | :TQ10001      |    51 |  3009 |    57   (0)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT       |               |       |       |            |          |  Q1,01 | PCWP |            |
    |   4 |     PX RECEIVE          |               |    51 |  3009 |    57   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |   5 |      PX SEND ROUND-ROBIN| :TQ10000      |    51 |  3009 |    57   (0)| 00:00:01 |        | S->P | RND-ROBIN  |
    |   6 |       REMOTE            | distant_tab   |    51 |  3009 |    57   (0)| 00:00:01 | XLCL~  | R->S |            |
    Remote SQL Information (identified by operation id):
       6 - SELECT /*+ OPAQUE_TRANSFORM */ "DST_ID","NAME_FR","NAME_NL","DST_UIC_CODE","DST_VOIC_CODE","BUR_UIC_COD
           E","BUR_VOIC_CODE","INFO_NEEDED","TRANSFERRED","VALID_FROM_DATE","VALID_TO_DATE","SORTING","SO_NEEDED"
           FROM "distant_tab" "DST" (accessing 'XLCL_XDST.WORLD' )
    SQL> select * from local_tab;
    select * from local_tab
    ERROR at line 1:
    ORA-12838: cannot read/modify an object after modifying it in parallel
    SQL> select
      2      dfo_number,
      3      tq_id,
      4      server_type,
      5      process,
      6      num_rows,
      7      bytes,
      8      waits,
      9      timeouts,
    10      avg_latency,
    11      instance
    12  from
    13      v$pq_tqstat
    14  order by
    15      dfo_number,
    16      tq_id,
    17      server_type desc,
    18      process
    19  ;
    DFO_NUMBER      TQ_ID SERVER_TYP PROCES   NUM_ROWS      BYTES      WAITS   TIMEOUTS AVG_LATENCY   INSTANCE
             1          0 Producer   QC             51       4451          0          0           0          1
             1          1 Consumer   QC              1        683         14          6           0          1
    This time parallel DML has been used
    What if I create a trigger on the local_tab table and repeat the insert?
    SQL> create or replace trigger local_tab_trg
      2  before insert on local_tab
      3  for each row
      4  begin
      5   null;
      6  end;
      7  /
    Trigger created.
    SQL> insert /*+ append parallel(lcl) */ into local_tab lcl select /*+ parralel(dst) */ dst.* from distant_tab dst;
    51 rows created.
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  1nhmuzb4ayq7x, child number 1
    insert /*+ append parallel(lcl) */ into local_tab lcl select /*+
    parralel(dst) */ dst.* from distant_tab dst
    Plan hash value: 1788691278
    | Id  | Operation                | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
    |   0 | INSERT STATEMENT         |               |       |       |    57 (100)|          |        |      |
    |   1 |  LOAD TABLE CONVENTIONAL |               |       |       |            |          |        |      |
    |   2 |   REMOTE                 | distant_tab |    51 |  3009 |    57   (0)| 00:00:01 | XLCL~ | R->S |
    Remote SQL Information (identified by operation id):
       2 - SELECT /*+ OPAQUE_TRANSFORM */ "DST_ID","NAME_FR","NAME_NL","DST_UIC_CODE","DST_VOIC_CODE",
           "BUR_UIC_CODE","BUR_VOIC_CODE","INFO_NEEDED","TRANSFERRED","VALID_FROM_DATE","VALID_TO_DATE","S
           ORTING","SO_NEEDED" FROM "distant_tab" "DST" (accessing 'XLCL_XDST.WORLD' )
    The parallel (direct-path) run has been disabled by the existence of this trigger, in both the distant and the local database.
    SQL> drop trigger local_tab_trg;
    Trigger dropped.
    Now I want to test an insert using only the append hint as shown below
    SQL> insert /*+ append */ into local_tab lcl select /*+ parralel(dst) */ dst.* from distant_tab dst;
    51 rows created.
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  4pkxbmy8410s9, child number 0
    insert /*+ append */ into local_tab lcl select /*+ parralel(dst) */
    dst.* from distant_tab dst
    Plan hash value: 2098243032
    | Id  | Operation        | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
    |   0 | INSERT STATEMENT |               |       |       |    57 (100)|          |        |      |
    |   1 |  LOAD AS SELECT  |               |       |       |            |          |        |      |
    |   2 |   REMOTE         | distant_tab |    51 |  3009 |    57   (0)| 00:00:01 | XLCL~    | R->S |
    Remote SQL Information (identified by operation id):
       2 - SELECT /*+ OPAQUE_TRANSFORM */ "DST_ID","NAME_FR","NAME_NL","DST_UIC_CODE","DST_VOI
           C_CODE","BUR_UIC_CODE","BUR_VOIC_CODE","INFO_NEEDED","TRANSFERRED","VALID_FROM_DATE","V
           ALID_TO_DATE","SORTING","SO_NEEDED" FROM "distant_tab" "DST" (accessing
           'XLCL_XDST.WORLD' )
    SQL> select * from local_tab;
    select * from local_tab
    ERROR at line 1:
    ORA-12838: cannot read/modify an object after modifying it in parallel
    Question 1: What does this last ORA-12838 mean if the execution plan is not showing a parallel DML insert?
    Particularly when I repeat the same insert using a parallel hint without the append hint
    SQL> insert /*+ parallel(lcl) */ into local_tab lcl select /*+ parralel(dst) */ dst.* from distant_tab dst;
    51 rows created.
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  40uqkc82n1mqn, child number 0
    insert /*+ parallel(lcl) */ into local_tab lcl select /*+ parralel(dst)
    */ dst.* from distant_tab dst
    Plan hash value: 2511483212
    | Id  | Operation               | Name          | Rows  | Bytes | Cost (%CPU)| Time     | TQ/Ins |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT        |               |       |       |    57 (100)|          |        |      |            |
    |   1 |  PX COORDINATOR         |               |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)   | :TQ10001      |    51 |  3009 |    57   (0)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT       |               |       |       |            |          |  Q1,01 | PCWP |            |
    |   4 |     PX RECEIVE          |               |    51 |  3009 |    57   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |   5 |      PX SEND ROUND-ROBIN| :TQ10000      |    51 |  3009 |    57   (0)| 00:00:01 |        | S->P | RND-ROBIN  |
    |   6 |       REMOTE            | distant_tab   |    51 |  3009 |    57   (0)| 00:00:01 | A1124~ | R->S |            |
    Remote SQL Information (identified by operation id):
       6 - SELECT /*+ OPAQUE_TRANSFORM */ "DST_ID","NAME_FR","NAME_NL","DST_UIC_CODE","DST_VOI
           C_CODE","BUR_UIC_CODE","BUR_VOIC_CODE","INFO_NEEDED","TRANSFERRED","VALID_FROM_DATE","V
           ALID_TO_DATE","SORTING","SO_NEEDED" FROM "distant_tab" "DST" (accessing'XLCL_XDST.WORLD' )
    SQL> select * from local_tab;
    select * from local_tab
    ERROR at line 1:
    ORA-12838: cannot read/modify an object after modifying it in parallel
    "in that particular case the REMOTE data was joined to some local data and there was an additional BUFFER SORT operation in the execution plan, which (in most cases) shows up when a serial to parallel distribution (S->P) is involved in an operation with further re-distribution of data, and the BUFFER SORT had to buffer all the data from remote before the local operation could continue - defeating any advantage of speeding up the local write operation by parallel DML. A serial direct path insert was actually faster in that particular case."
    Question 2: Isn't this the normal behavior of distributed DML, where the driving site is always the site where the DML operation is done? And is this why the BUFFER SORT had to buffer all the data from the remote site to the driving (local) site?
    http://jonathanlewis.wordpress.com/2008/12/05/distributed-dml/
    Best regards
    Mohamed Houri

  • Append hint + ADO + Oracle OleDB Provider

    Hi everybody!
    This is my first post here in this great forum! ;-)
    I have a problem using the APPEND hint with the Oracle OLE DB Provider and I've been searching the internet for an answer without any luck.
    I'm trying to use the APPEND hint with ADO + the Oracle OLE DB Provider (OraOLEDB.Oracle.1), as in the SQL below:
    INSERT /*+ APPEND */
    INTO my_table(field1, field2, field3)
    SELECT 0 field1, v.field2, v.field3
    FROM my_second_table v
    The problem: Oracle is still generating log for this INSERT (it works as if there were no APPEND hint).
    If I use the same SQL statement with the Microsoft OLE DB Provider for Oracle, the APPEND hint works as expected (no log is created), but it doesn't work at all with the Oracle OLE DB Provider.
    A trace shows me that the SQL statement is OK (the APPEND hint is there!).
    I've tried Oracle servers 9.2 and 10g, and the problem is the same.
    Question: Does the APPEND hint work with the Oracle OLE DB Provider? If yes, why is it not working here? Is it something related to connection properties?
    Any help will be much appreciated!
    Thanks in advance.
    Alexandre Machado

    user8010279 wrote:
    Hi Solomon, thanks for you answer.
    Is the same SQL against the same database, with the same program, using ADO + OleDB Provider.
    The table is in NOLOGGING mode.
    When I use the Microsoft OLE DB Provider for Oracle there is no log creation. Then I disconnect and reconnect to the same server/database using the Oracle OLE DB Provider, execute the same SQL and... there IS log creation, meaning that in that scenario the APPEND hint is being ignored. I can't figure out WHY!!! :-(
    Alexandre,
    I'm not sure what you mean by "there is log creation". In general you need to distinguish between UNDO and REDO generation. A direct-path insert (APPEND hint) doesn't generate undo but can still generate redo, depending on the ARCHIVELOG / FORCE LOGGING mode of the tablespace or database and the LOGGING/NOLOGGING attribute of the table.
    Note that in case indexes exist on the table there will always be undo and therefore redo generation for the index maintenance as part of the direct-path insert.
    You should check V$SESSION (SQL_ID in 10g, SQL_ADDRESS + SQL_HASH_VALUE in pre-10g) and V$SQL in the database to double check if the SQL passed by the Oracle OLEDB Provider actually contains the APPEND hint in case the INSERT actually generates UNDO (which is the indicator that shows you if the direct-path insert is used or not). Whether it generates REDO is - as already mentioned - depending on other factors.
    So the question is how have you determined if the direct-path insert mode has been used or not?
    The simplest approach to test if direct-path insert mode is used or not is to issue a query on the object inserted into after the insert before committing the transaction. If it fails with "ORA-12838: cannot read/modify an object after modifying it in parallel" then you successfully inserted using direct-path insert.
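    A minimal sketch of that test (the table name t is made up for the example):
    INSERT /*+ APPEND */ INTO t SELECT * FROM all_objects;
    SELECT COUNT(*) FROM t;   -- ORA-12838 here means the direct-path insert was used
    COMMIT;
    SELECT COUNT(*) FROM t;   -- succeeds once the transaction has ended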
    Note that there a number of restrictions that prevent the direct-path insert from happening, in those cases the APPEND hint will be silently ignored, e.g. enabled triggers, foreign keys on the table. A quite comprehensive list of restrictions is listed in the manuals here:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28313/usingpe.htm#CACEJACE
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Remove the data which is recently loaded, but not disturbing the original

    Hello Experts!!
    I have a few tables in Oracle 9i/10g, and they already have data in them. I am trying to migrate data coming from various source systems into these Oracle tables. There is a chance that after loading I might get some unwanted data into these tables.
    How do I remove just the data which I have loaded recently, without disturbing the original data the tables already hold?
    A lot of people have advised me to back up those tables and reload the data if there is any problem, but I am looking for a different approach. I just don't want to change the existing system, as a lot of users use it.
    Could anyone help me solve this issue?
    Thanks,
    Vishal

    Visu wrote:
    How are you loading the data daily? Is it direct path?
    1) Capture the primary key or rowid information into an interim table while loading data into the target table (either trigger or stored procedure) - triggers will not be fired if the load is direct path; otherwise it is a feasible way, as there will be no impact on the existing code.
    That's not true. The trigger will disable the direct path, not the other way around.
    Simple proof.
    TUBBY_TUBBZ?create table for_direct_pathery
      2  (
      3     column1 number
      4  );
    Table created.
    Elapsed: 00:00:00.07
    TUBBY_TUBBZ?
    TUBBY_TUBBZ?insert --+ APPEND
      2  into for_direct_pathery
      3  select 1 from dual ;
    1 row created.
    Elapsed: 00:00:00.00
    TUBBY_TUBBZ?
    TUBBY_TUBBZ?select * from for_direct_pathery;
    select * from for_direct_pathery
    ERROR at line 1:
    ORA-12838: cannot read/modify an object after modifying it in parallel
    Elapsed: 00:00:00.01
    TUBBY_TUBBZ?
    TUBBY_TUBBZ?rollback;
    Rollback complete.
    Elapsed: 00:00:00.00
    TUBBY_TUBBZ?
    TUBBY_TUBBZ?create or replace trigger disable_direct_path_access
      2  before insert on for_direct_pathery
      3  begin
      4     null;
      5  end;
      6  /
    Trigger created.
    Elapsed: 00:00:00.09
    TUBBY_TUBBZ?
    TUBBY_TUBBZ?insert --+ APPEND
      2  into for_direct_pathery
      3  select 1 from dual ;
    1 row created.
    Elapsed: 00:00:00.00
    TUBBY_TUBBZ?
    TUBBY_TUBBZ?select * from for_direct_pathery;
               COLUMN1
                     1
    1 row selected.
    Elapsed: 00:00:00.00
    TUBBY_TUBBZ?
    Notice how you can select from the table (since it wasn't direct-path loaded) when the trigger is in place.

  • Procedure to delete a project in Workshop

    Hi All,
    In the current version of WebLogic Workshop IDE, we have not provided an
    option to delete a project.
    I have listed below the 3 steps you need to take to delete a project.
    Steps
    1. Delete EJB components corresponding to the JWS files in the project
    through the console.
    2. Edit the config.xml file to remove the WebAppComponent entry
    corresponding to the project.
    3. Delete the project directory from the applications directory.
    Reasoning
    The creation and building of a project, say "testProject", in Workshop
    leads to the addition of the following WebAppComponent entry in config.xml
    file:
    <Application Name="_appsdir_testProject_dir"
    Path="C:\bea\weblogic7\samples\workshop\applications"
    StagingMode="nostage" TwoPhase="true">
    <WebAppComponent Name="testProject" Targets="cgServer" URI="testProject"/>
    </Application>
    Every JWS file built in the testProject, say "testJWS", creates the following
    EJBComponent entry in config.xml
    <Application Deployed="true" Name="testProject.testJWS_EJB"
    Path="C:\bea\weblogic7\samples\workshop\cgServer\.jwscompile\_jwsdir_testPro
    ject\EJB" TwoPhase="true">
    <EJBComponent Name="testJWSEJB" Targets="cgServer" URI="testJWSEJB.jar"/>
    </Application>
    To delete a project, these two or more entries should get removed from the
    config.xml file. The EJB corresponding to a JWS can be removed through the
    console. However, please note that if the server is running in the
    development mode, a webapp in the "applications" directory cannot be deleted
    through the console. Hence, the config.xml file needs to be manually edited
    to remove the WebAppComponent entry corresponding to a Workshop project
    (which is essentially a webapp). The physical project directory can then be
    deleted from the applications directory.
    I realise this is some hard work, and we do plan to address this in version 2.
    Please do let me know if you have any further queries regarding this issue.
    Regards,
    Anurag
    WebLogic Workshop Support


  • PL/SQL report errors: ORA-01422

    Hi all,
    (Before you read, I would like to say I have searched the net for this error code but nothing like this problem shows up.)
    I am getting an error when I select certain schemas from a list on an APEX app page; it only works for some schemas, not all.
    When I select one schema, it is supposed to display one row; when I select [ALL] it is supposed to show them all.
    It does work if I select '[ALL]' from the select list (p3_schema_name), just not for every single individual one.
    The error code:
    ORA-01422: exact fetch returns more than requested number of rows
    declare
      vSchema  varchar2(20);
      vStmt  varchar2(1000);
      vVersion number(5);
      vDBName  varchar2(20);
      vHostName varchar2(80);
      vStmt2  varchar2(1000);
      vVersion2 number(5);
      vDBName2  varchar2(20);
      vServer2 varchar2(80);
      vSchema2 varchar2(80);
      CURSOR c_schemas IS
        select owner from dba_tables@P3_DB_NAME.db_link where table_name = 'DDL_LOG' and num_rows > 0 order by owner;
    begin
      IF :P3_SCHEMA_NAME != '[ALL]' AND :P3_DB_NAME IS NOT NULL AND :P3_SERVER_NAME IS NOT NULL THEN
        vServer2 := :P3_SERVER_NAME;
        vSchema2 := :P3_SCHEMA_NAME;
          vStmt2 := 'select distinct DDH_DB_NM, max(DDH_SCHEMA_NR)keep(dense_rank last order by ddh_runstart_td) AS "PATCH" from &P3_SCHEMA_NAME..ddl_log@&P3_DB_NAME.db_link GROUP BY DDH_DB_NM';
          Execute Immediate vStmt2 into vDBName2, vVersion2;
            htp.p('<br>');
            htp.p('<table border="1">');
            htp.p('<tr>');
            htp.p('<th bgcolor="#FFCC99">SERVER NAME</th>');
            htp.p('<th bgcolor="#FFCC99">DB NAME</th>');
            htp.p('<th bgcolor="#FFCC99">SCHEMA NAME</th>');
            htp.p('<th bgcolor="#FFCC99">PATCH</th>');
            htp.p('</tr>');
            htp.p('<tr>');
            htp.p('<td>');
            htp.p(vServer2);
            htp.p('</td>');
            htp.p('<td>');
            htp.p(vDBName2);
            htp.p('</td>');
            htp.p('<td>');
            htp.p(vSchema2);
            htp.p('</td>');
            htp.p('<td>');
            htp.p(vVersion2);
            htp.p('</td>');
            htp.p('<td>');
            htp.p('<BR>');
            htp.p('</td>');
            htp.p('</tr>');
            htp.p('</tr>');
            htp.p('</table>');
       ELSE IF :P3_SCHEMA_NAME = '[ALL]' AND :P3_DB_NAME IS NOT NULL AND :P3_SERVER_NAME IS NOT NULL THEN
       vHostName := :P3_SERVER_NAME;
       vDBName := :P3_DB_NAME;
         open c_schemas;
          htp.p('<br>');
          htp.p('<table border="1">');
          htp.p('<tr>');
          htp.p('<th bgcolor="#FFCC99">SERVER NAME</th>');
          htp.p('<th bgcolor="#FFCC99">DB NAME</th>');
          htp.p('<th bgcolor="#FFCC99">SCHEMA NAME</th>');
          htp.p('<th bgcolor="#FFCC99">PATCH</th>');
          htp.p('</tr>');
        LOOP
          FETCH c_schemas INTO vSchema;
          EXIT WHEN c_schemas%NOTFOUND;
          vStmt  := 'select max(DDH_SCHEMA_NR)keep(dense_rank last order by ddh_runstart_td) AS "PATCH" from '||vSchema||'.ddl_log@&P3_DB_NAME.db_link where DDH_SCHEMA_NR = (select max(DDH_SCHEMA_NR) from '||vSchema||'.ddl_log@&P3_DB_NAME.db_link) and rownum < 2' ;
          Execute Immediate vStmt into vVersion  ;
          htp.p('<tr>');
          htp.p('<td>');
          htp.p(vHostName);
          htp.p('</td>');
          htp.p('<td>');
          htp.p(vDBName);
          htp.p('</td>');
          htp.p('<td>');
          htp.p(vSchema);
          htp.p('</td>');
          htp.p('<td>');
          htp.p(vVersion);
          htp.p('</td>');
          htp.p('<td>');
          htp.p('<BR>');
          htp.p('</td>');
          htp.p('</tr>');
        END LOOP;
          htp.p('</tr>');
          htp.p('</table>');  
      CLOSE c_schemas;
    END IF;
    END IF;
    END;
    I have checked the DDH_SCHEMA_NR for repeating entries of the highest number. Some of the ones that don't work have repeating entries; some don't.
    Sorry if this is confusing; I have tried to explain it as best as I can.
    Thanks in advance for any help.
    Ashleigh

    Hello Ashleigh,
    Based on your code, I'd start by running this piece of SQL from the command line (through SQL Workshop, SQL*Plus, Toad, etc.), replacing &P3_SCHEMA_NAME. and &P3_DB_NAME. with values that are currently causing the routine to fail, and see if it returns more than one row. I don't know your data, but DISTINCT and GROUP BY are typically used to return multiple (though grouped/summarized) rows. It appears to be the only statement that could cause the error you're seeing (more than one row being returned into single variables).
    select distinct DDH_DB_NM, max(DDH_SCHEMA_NR)keep(dense_rank last order by ddh_runstart_td) AS "PATCH" from &P3_SCHEMA_NAME..ddl_log@&P3_DB_NAME.db_link GROUP BY DDH_DB_NM;
    I'm actually surprised that the code runs at all. I didn't think 'execute immediate' would know what to do with substitutions indicated as "&something." (I've typically seen that when substituting into dynamic HTML/Javascript code, but maybe I'm learning something new). But since you already have vServer2 and vSchema2, I'd be more apt to code it as:
    vStmt2 := 'select distinct DDH_DB_NM, max(DDH_SCHEMA_NR)keep(dense_rank last order by ddh_runstart_td) AS "PATCH" from ' ||
              vSchema2 || '.ddl_log@' || vServer2 || '.db_link GROUP BY DDH_DB_NM';
    Hope this helps,
    John
