NAM data export using NDE

Hi,
Can someone help me by providing an example of each type of data collected by NAM using the NDE export feature?
As per NAM's guide, NAM exports the following types of data:
Host
Application
Client Server Application
Application conversation
Network conversation
RTP Metrics
I just wanted to check all the data exported by NAM, with an example for each type (host, application, etc.).
Thanks,
Parul

Hi,
You can't exclude dimensions while taking a data export. Even if you don't specify a dimension in a FIX statement,
the export will still include all blocks related to the missing dimension (level 0 and upper-level blocks).
@REMOVE will not remove the dimension's members from a FIX statement; it removes a subset of the members fixed for a particular dimension.
E.g. FIX(@REMOVE(@RELATIVE("Entity",0),@LIST("E1","E2"))), where E1 and E2 are level 0 members that you are fixing on in the FIX statement (note that @REMOVE takes two lists, so E1 and E2 go inside @LIST). Here E1 and E2 will not be considered for any calculation, or in your case, for the export.
If you want to load the export from this cube into another cube with fewer dimensions, you can use a load rule and ignore the columns corresponding to the dimensions that are not in the target cube.
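To make this concrete, here is a minimal DATAEXPORT calc script sketch along those lines (the file path is illustrative, and E1/E2 are the same level 0 Entity members as in the example above):

    SET DATAEXPORTOPTIONS
       {
       DataExportLevel "LEVEL0";
       };
    /* export everything except E1 and E2 */
    FIX (@REMOVE(@RELATIVE("Entity",0), @LIST("E1","E2")))
       DATAEXPORT "File" "," "/tmp/entity_export.txt" "#MI";
    ENDFIX;

Any dimension left out of the FIX is still exported in full, as described above.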
Thanks,
CM

Similar Messages

  • InfoUser Master Data Export using scc8

    Hi Team,
I am doing a user master data export using SCC8, and I have one question regarding this:
1) When I took the export using SCC8, only one TR was created in the cofiles folder, and I can't see that TR in the data files folder.
Can anybody help me with this?
    More info:
    We are going to rebuild the system so we need to preserve the user master record data.
    Regards,
    Abhilash
    Edited by: gundala$ on Feb 29, 2012 8:10 AM

    Hi,
    Kindly go through the following link.
    http://forums.sdn.sap.com/thread.jspa?threadID=1310350
    Anil

  • Problem in data export using DisplayTag

    Hello Friends,
I am getting the following exception when I try to export data using Display Tag's built-in facility:
    [2008-02-26 16:54:27,472] WARN  http-7070-Processor22 (BaseNestableJspTagException.java:99  ) - Exception: [.LookupUtil] Error looking up property "mgrname" in object type "java.util.ArrayList". Cause: Unknown property 'mgrname'
    java.lang.NoSuchMethodException: Unknown property 'mgrname'
         at org.apache.commons.beanutils.PropertyUtilsBean.getSimpleProperty(PropertyUtilsBean.java:1122)
         at org.apache.commons.beanutils.PropertyUtils.getSimpleProperty(PropertyUtils.java:408)
         at org.displaytag.util.LookupUtil.getProperty(LookupUtil.java:271)
         at org.displaytag.util.LookupUtil.getBeanProperty(LookupUtil.java:129)
         at org.displaytag.model.Column.getValue(Column.java:124)
         at org.displaytag.export.BaseExportView.doExport(BaseExportView.java:265)
         at org.displaytag.tags.TableTag.writeExport(TableTag.java:1404)
         at org.displaytag.tags.TableTag.doExport(TableTag.java:1356)
         at org.displaytag.tags.TableTag.doEndTag(TableTag.java:1227)
         at org.apache.jsp.WEB_002dINF.jsps.common.tableViewTag_jsp._jspx_meth_displayTag_table_0(tableViewTag_jsp.java:195)
         at org.apache.jsp.WEB_002dINF.jsps.common.tableViewTag_jsp._jspService(tableViewTag_jsp.java:89)
         at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
There is no problem displaying the data in table form on the page, but when I try to export it (CSV, Excel, XML), it throws the above exception, which is a bit surprising to me.
The Display Tag related tags are in a JSP. This JSP is included in a Spring tag handler class via pageContext.include("xyz.jsp"). This tag (defined by the Spring tag handler class) is used in another JSP where the table is displayed. Paging works perfectly, but when I click on export, the exception occurs.
I am using the following:
JDK 1.5, Displaytag 1.1 and Spring 1.2.7
The actual flow is something like this:
1) The controller forwards the request to a JSP page.
2) This JSP page uses a custom tag.
3) Control goes to the custom tag handler class, where I set all the data into the request:
   pageContext.getRequest().setAttribute("tableViewTag_data", data);
4) Then I include the page:
   pageContext.include("/WEB-INF/jsps/common/xyz.jsp");
5) This xyz.jsp contains the following code:
        <displayTag:table pagesize="10" requestURI="${cmd.metaClass}.htm" name="tableViewTag_data" class="displaytag" decorator="${tableViewTag_options['decorator']}" export="true">
             <displayTag:setProperty name="paging.banner.placement" value="top"/>
             <c:forEach var="property" varStatus="propertyStatus" items="${tableViewTag_columnProperties}">
                  <c:set var="propertyTitle"><fmt:message key="field.${cmd.metaClass}.${property}" /></c:set>
                  <displayTag:column property="${property}" title="${propertyTitle}" />
             </c:forEach>
     </displayTag:table>
Here, I am able to retrieve all the data.
6) So, in this way the page gets rendered.
I have also included the export filter in the web.xml file.
I hope I have provided all the information; I don't think I have made any silly mistake. :-)
Looking forward to hearing from you.
    Thanks
    Vishal Pandya

    Hi,
expdp and exp are different Oracle export utilities, hence the output file sizes are not the same, and so the difference occurs.
No, this is not a problem.
Since it is not a problem, there is no solution needed. Why do you see this as a problem?
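For context, a minimal sketch of the two invocations (the schema and file names here are illustrative):

    exp scott/tiger OWNER=scott FILE=scott.dmp LOG=scott_exp.log
    expdp scott/tiger SCHEMAS=scott DIRECTORY=DATA_PUMP_DIR DUMPFILE=scott.dmp

exp writes wherever the client-side FILE path points, while expdp writes server-side through a directory object; the two utilities use different dump formats, so different sizes for the same schema are expected.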
    Cheers
    Anurag

  • Data export(ttbulkcp) Oracle TimesTen Question

I'm trying to export from Oracle TimesTen (TimesTen Release 11.2.1.5.0) with ttbulkcp (Data export), using SQL Developer Version 2.1.1.64.
All other functions operate normally, and ttbulkcp (Data export) on an Oracle table works fine. But I get the following error from ttbulkcp (Data export) on a TimesTen table:
    java.lang.NullPointerException
         at oracle.dbtools.raptor.format.ResultsFormatterWrapper.getColumnCount(ResultsFormatterWrapper.java:67)
         at oracle.dbtools.raptor.format.ResultsFormatter.getColumnCount(ResultsFormatter.java:130)
         at oracle.dbtools.raptor.timesten.format.TimesTenLoaderFormatter.getColumns(TimesTenLoaderFormatter.java:207)
         at oracle.dbtools.raptor.timesten.format.TimesTenLoaderFormatter.printColumnData(TimesTenLoaderFormatter.java:183)
         at oracle.dbtools.raptor.timesten.format.TimesTenLoaderFormatter.start(TimesTenLoaderFormatter.java:73)
         at oracle.dbtools.raptor.format.ResultSetFormatterWrapper.print(ResultSetFormatterWrapper.java:150)
         at oracle.dbtools.raptor.format.ResultsFormatter.print(ResultsFormatter.java:200)
         at oracle.dbtools.raptor.format.ResultsFormatter.doPrint(ResultsFormatter.java:416)
         at oracle.dbtools.raptor.dialogs.actions.TableExportAction$5.doWork(TableExportAction.java:637)
         at oracle.dbtools.raptor.dialogs.actions.TableExportAction$5.doWork(TableExportAction.java:634)
         at oracle.dbtools.raptor.backgroundTask.RaptorTask.call(RaptorTask.java:193)
         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
         at oracle.dbtools.raptor.backgroundTask.RaptorTaskManager$RaptorFutureTask.run(RaptorTaskManager.java:492)
         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
         at java.lang.Thread.run(Thread.java:619)
    Driver not capable
    Thank you.
    GooGyum

If you have a DB support contract, I suggest you open an SR on Metalink/MOS to get an official response and follow-up; otherwise you can hope someone from development picks it up here.
    Regards,
    K.

  • Query in File to File Data Export

    Hi,
I have done a project for file-to-file data export using the link below:
    http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/odi/odi_11g/odi_project_ff-to-ff/odi_project_flatfile-to-flatfile.htm
My project works fine. Here the source file is on the local machine (C:\Oracle\Middleware\Oracle_ODI1\oracledi\demo\file).
1) What should I do if the source file is on a remote machine (say its IP is 172.22.18.90)?
2) Will it cause any problem if the remote machine's OS is Unix?
    Thanks
    Papai
NB: My machine's OS is Windows 7.

If you can access the other machine as a shared folder, then provide that path in the physical schema, like \\my_other_pc_on_shared\new_folder.
If you cannot access it like that, then create an agent on that machine which can access the path, and execute your project using that agent.
    Thanks
    Bhabani
    http://bhabaniranjan.com/

  • Batch change does not change meta data name when exporting or copy

I use Batch Change to change the name of an event when photos from more than one source (camera) are imported. When I copy or export the photos, the name reverts to the original metadata file name. I do this to avoid duplicate names: for instance, photos from two cameras shooting the same event can have identical names such as IMG_0100. iPhoto does not show all the metadata in Info, but it does show the changed name when viewing Info in iPhoto.
Is there some way to permanently change the name on photos using a group or batch method?

No, you're not changing any filenames at all.
There is no way to change a filename in iPhoto. What you do with Batch Change is add a Title to the photo; a Title is an entry in the metadata.
You can set the name of an exported file to be the same as the Title when you export... if you choose that option.
    This User Tip
    https://discussions.apple.com/docs/DOC-4921
    has details of the options in the Export dialogue.

  • Why does iPhoto (9.0/11) not retain the Event name when exporting more than one event? (using File - Export - Album name with number).

    Why does iPhoto (9.0/11) not retain the Event name when exporting more than one event? (using File -> Export -> Album name with number).
Exporting a single Event retains the Event name, which is what I'd expect. But highlighting more than one Event and exporting renames the images to Event 001.JPG, Event 002.JPG, etc.
I was recently on holidays and had all my events nicely split on Dad's computer, but when I went to export them I couldn't retain any of this information. Now I have to replicate this all again on my computer.
It wasn't possible to export the entire library, as the external drive was FAT32 formatted and I didn't want all of it. It would be nice to export a bunch of events to someone and have them retain the names.
Does anyone have a work around, or will this be fixed at some point by Apple?

  • Exported file name enlarges itself when using dmexpimpdemo.java

    Hi everyone,
When I use dmexpimpdemo.java to export, the file I name is saved with the name enlarged by 001. My question is how I can make it keep the exact name I give it, without the 001.
Example: FileName
is saved as: FileName001
I appreciate your help.

Export uses the user-supplied file name as a prefix and appends suffixes to the files it creates. Since the exported content can span multiple files, it works this way.
Hope this helps.
    Sunil

  • Been using Macs since Classic. Using MBP, OS 10.7.5. Trying to figure out how to print/save folder contents (Name, Date Modified, Size, Kind) of an entire (non-visible) folder? Any apps that can do this?

Been using Macs since before Classic. Using an MBP, OS 10.7.5. Trying to figure out how to print/save folder contents, including what's not visible on screen (Name, Date Modified, Size, Kind), for an entire folder?
Screenshots are getting old hat.
Tried Select All, Copy, Paste into Pages, but it didn't display Date Modified, Size and Kind, and it also included images of the selected files, which is too large.
Any apps/shortcuts/utilities that can do this? Thanks in advance.

    Hello Achates:
I did not read the rather long post. If you wish to reinstall OS X 10.4, use your software install DVD. A backup is essential. To minimize your risk, I would use an Archive and Install:
    http://docs.info.apple.com/article.html?artnum=107120
In that way, you will have a fresh copy of OS X and your current settings will be preserved.
Incidentally, I do not agree that the printer problem is best solved by reinstalling OS X. I have had HP printers for some time and, on one occasion, had difficulty after an upgrade. HP technical support walked me through uninstalling all traces of the HP driver and then reinstalling.
    Barry

  • Transpose Distinct Date and Use as Column name

    All,
I am trying to transpose distinct dates, use them as column names, and list the data below them. My version of Oracle does not have the PIVOT function, and I am also having trouble figuring out how to set the column names from the distinct dates. Can someone provide me with logic that will dynamically take the current format (see below) and transpose it to the needed format (see below)?
    Current format:
    WEEK_END_DATE     RD     STORE_NUMBER     RANK
    09-19-2009     R0011     00505     6
    09-19-2009     R0028     00097     97
    09-19-2009     R0057     01801     72
    09-19-2009     R0061     04775     72
    09-19-2009     R0068     03920     66
    09-26-2009     R0011     00505     8
    09-26-2009     R0028     00097     50
    09-26-2009     R0057     01801     120
    09-26-2009     R0061     04775     30
    09-26-2009     R0068     03920     1
    nth date
    The format I need:
    RD     STORE_NUMBER     09-19-2009 09-26-2009 nth date....
    R0011     00505     6 8
    R0028     00097     97 50
    R0057     01801     72 120
    R0061     04775     72 30
    R0068     03920     66 1

I appreciate your help. I have tried to implement the dynamic pivot logic: I can create the correct CASE statements and change the column names to the names I need.
    However, when I try to use @@dynamic_pivot.sql I get the following error:
    ORA-04054: database link ORAEDW@DYNAMIC_PIVOT_SUBSCRIPT does not exist
    I have tried it three ways...
    @@dynamic_pivot
    and @@dynamic_pivot.sql
    and @@c:\dynamic_pivot_subscript.sql
I don't have direct access to the database; I have to run my queries from Toad. When I cut and paste the generated CASE statements, it works. I am just not able to dynamically insert those CASE statements into a SELECT. Do you have any suggestions?
-- ***** Start of dynamic_pivot.sql *****
-- Suppress SQL*Plus features that interfere with raw output
set feedback off;
set heading off;
SPOOL c:\dynamic_pivot_subscript.sql
SELECT DISTINCT
       ',max(CASE WHEN week_end_date = ''' || week_end_date
    || ''' THEN rank END) AS DATE_AS_OF_'
    || to_char(week_end_date,'MM_DD_YYYY')
FROM   test_ptw_bottom_10;
SPOOL OFF
-- Restore the SQL*Plus features suppressed earlier
set feedback on;
set heading on;
    What I need:
    SELECT     
    rd,
    store_number
    @@c:\dynamic_pivot_subscript.sql
    FROM     
    test_ptw_bottom_10
    where
week_end_date in ('19-SEP-09','26-SEP-09')
    group by
    rd,store_number
    ORDER BY     
    rd;
    What works:
    SELECT     
    rd,
    store_number
    ,max(CASE WHEN week_end_date = '26-SEP-09' THEN rank END) AS DATE_AS_OF_09_26_2009
    ,max(CASE WHEN week_end_date = '19-SEP-09' THEN rank END) AS DATE_AS_OF_09_19_2009
    FROM     
    test_ptw_bottom_10
    where
week_end_date in ('19-SEP-09','26-SEP-09')
    group by
    rd,store_number
    ORDER BY     
    rd;
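Toad has no SQL*Plus script substitution, so the @@ inside the SELECT gets parsed as a database link reference, which is exactly what the ORA-04054 above is complaining about. One workaround is to assemble and print the complete statement in an anonymous PL/SQL block and run that from Toad instead. A minimal sketch, assuming week_end_date is a DATE column:

    DECLARE
       v_sql VARCHAR2(32767) := 'SELECT rd, store_number';
    BEGIN
       -- build one max(CASE ...) column per distinct week_end_date
       FOR r IN (SELECT DISTINCT week_end_date
                 FROM   test_ptw_bottom_10
                 ORDER  BY week_end_date)
       LOOP
          v_sql := v_sql
                || ', max(CASE WHEN week_end_date = DATE '''
                || TO_CHAR(r.week_end_date, 'YYYY-MM-DD')
                || ''' THEN rank END) AS DATE_AS_OF_'
                || TO_CHAR(r.week_end_date, 'MM_DD_YYYY');
       END LOOP;
       v_sql := v_sql || ' FROM test_ptw_bottom_10'
                      || ' GROUP BY rd, store_number ORDER BY rd';
       -- print the finished statement so it can be pasted into the Toad editor
       DBMS_OUTPUT.PUT_LINE(v_sql);
    END;
    /

With server output enabled, the printed statement has the same shape as the "What works" query above, just generated for every date present in the table.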
    Edited by: user10609947 on Oct 5, 2009 2:36 PM
    Edited by: user10609947 on Oct 5, 2009 2:38 PM

  • Format of NAM monitoring export data

    Hi specialists
Can NAM export the monitoring data as XML?
And where can I find the format list for this?
    Thanks very much!
    QxiangZh

I found in the NAM software 5.1 user guide that there are only two formats: HTML and CSV.
Hope someone can help to confirm.
    Thanks!
    QxiangZh

  • Unable to do data form export using FormDefUtil.cmd

    Hi all,
I'm not able to do the data form export using FormDefUtil.cmd:
D:\hyperion\planning\bin\FormDefUtil.cmd export <formname> localhost admin password <appname>
I get the message below when I try the above script at the cmd prompt:
    usage: HspFormDefUtil <import/export> <filename/formname/-all> <server> <username> <password> <application>.
Any help is appreciated.
    Thanks!!
    Rev

It works... but I still didn't get the .xml file. When I checked the logs, they were updated as below:
    hbrserver.log:
2010-08-17 07:54:44,346 WARN main com.hyperion.hbr.security.HbrSecurityAPI - Error retrieving user by identity
    FormDefUtil.log:
Single sign on validation failed.
Could there be a problem with authentication?
    Thanks!!
    Rev

  • Issue found in EHS when using the specification data import process

Dear EHS community,
Having used EHS Classic for a long time, we have now detected an issue in the EHS standard import. During maintenance of EHS data, normally via CG02, the system always uses the default "Data origin" specified in Customizing when storing records in the EHS tables (e.g. ESTRH, ESTRI etc.). In the standard specification import process, one can define a different "Data origin". We used an import file with the default data origin and executed the import, and a strange effect has been detected (though not always) in the update of identifiers. For the import you must nominate at least one identifier. If the identifier is found, then normally no update happens and only the value assignment data is inserted (or updated). If the identifier is not found, it gets inserted at spec level in ESTRI. Now, during the update, the "Data origin" of the identifier already present in the system (the one that matched the identifier in the file) was changed, but not the identifier as such. Every record at value assignment level received the default data origin. There is no obvious explanation for this behaviour: where the default "Data origin" was "SAP" (as the term), the value was changed to blank. Any explanation of this effect (or ideas about it) is appreciated.
    C.B.
PS: Analysis of the change logs in EHS executed so far clearly indicates that an "update" happened on the identifier, but only the field SRSID is affected; the EHS import is quite old and therefore very stable.
PPS: I found a thread talking about the import file:
spec import_inheritance data
The example shown there is like:
    +BS
    +BV   $ESTRH
    SRSID                          EH&S
    SUBID                          000000385000
    SUBCAT                         REAL_SUB
    AUTHGRP                        EHS_PS
    +EV
    +BV   $ESTRI
    SRSID                          EH&S
    IDTYPE                         NAM
    IDCAT                          EHS_OLD
    IDENT                          XY0002
    ORD                            0001
    +EV
    +BV   SAP_EHS_1013_001
    $ESTVA-SRSID                   EH&S
    SAP_EHS_1013_001_VALUE         N09.00101280
    +EV
If you compare with the SAP help, normally you will find "SRSID" only at the beginning of the file. Here this field is nominated often, at the level of ESTRH as well as ESTRI.
PPS: e.g. refer to: TCG56 EHS: Data Origin - SAP Table - ABAP

Dear Ralph,
First, thanks for the feedback. Regarding the content provider: we need to check that at a deeper level.
Regarding the import, maybe my explanation was not good enough. Imagine this case: you have a specification in the system to which you would like to add, e.g., density data. To do so you need at least one identifier, which you must nominate during the import. As long as this identifier is identical in the system and in the file, the identifier should not be changed or affected, and only the additional data should be loaded. This was the process we used. Now we have detected that this seems not to be the real effect: the identifier that is part of the file is "updated" in EH&S. In the example above somebody used this logic:
    +BV   $ESTRI
    SRSID                          EH&S
    IDTYPE                         NAM
    IDCAT                          EHS_OLD
    IDENT                          XY0002
    ORD                            0001
    +EV
Nearly the same is used in our process. The only difference is that we do not define the SRSID; the same is true for any other data in the file, where SRSID is never specified.
What is happening now: the density data, for example, is added with SRSID "EH&S". This effect is normal, since by default SRSID should be EH&S (it is defined as such in Customizing), and since at the top of the file the ID is likewise "EH&S". In the system we have "XY0002" with SRSID EH&S. After using this upload approach, the only difference is that in the system XY0002 gets a blank SRSID (and there is no data origin "blank" defined). Up to today my understanding was clearly that no update should happen on the identifier; this seems not to be the case. Is my understanding wrong here? Or is the SRSID really mandatory at ESTRI level in the load file to avoid this effect? I hope you can provide some feedback on this.
    C.B.
PS: Referring to: Example: Transfer File for Specifications - Basic Data and Tools (EHS-BD) - SAP Library
The header of the import file should look like this (section, key, value):
Comment:                          +C
Administrative section:
  Character standard:             +SC  ISO-R/3
  Identification (database name): +ID  IUCLID
  Format version:                 +V   2.21
  Export date:                    +D   19960304
  Key date for export:            +VD  19960304
  Set languages for export:       +SL  E
  Date format:                    +DF  DD.MM.YYYY
In our case +ID = EH&S (as this is the value in the export file).
In the example this additional block is shown:
Begin table:   +BV  $ESTRI
  Table field: IDTYPE  NAM
  Table field: IDCAT   IUPAC
  Table field: IDENT   anisole
  Table field: LANGU   E
  Table field: OWNID   ID1
Therefore no SRSID is specified. This matches the data in our file (at a high level), and the only change after the import is that the identifier's SRSID gets deleted.

  • Schema Export using DBMS_DATAPUMP is extremely slow

    Hi,
I created a procedure that duplicates a schema within a given database by first exporting the schema to a dump file using DBMS_DATAPUMP and then importing the same file (I can't use a network link because it fails most of the time).
My problem is that a regular schema Data Pump export takes about 1.5 minutes, whereas the export using DBMS_DATAPUMP takes about 10 times longer: something in the range of 14 minutes.
Here is the code of the procedure that duplicates the schema:
    CREATE OR REPLACE PROCEDURE MOR_DBA.copy_schema3 (
                                              source_schema in varchar2,
                                              destination_schema in varchar2,
                                              include_data in number default 0,
                                              new_password in varchar2 default null,
                                              new_tablespace in varchar2 default null
                                            ) as
      h   number;
      js  varchar2(9); -- COMPLETED or STOPPED
      q   varchar2(1) := chr(39);
      v_old_tablespace varchar2(30);
      v_table_name varchar2(30);
    BEGIN
       /* open a new schema level export job */
       h := dbms_datapump.open ('EXPORT',  'SCHEMA');
       /* attach a file to the operation */
       DBMS_DATAPUMP.ADD_FILE (h, 'COPY_SCHEMA_EXP' ||copy_schema_unique_counter.NEXTVAL || '.DMP', 'LOCAL_DATAPUMP_DIR');
       /* restrict to the schema we want to copy */
       dbms_datapump.metadata_filter (h, 'SCHEMA_LIST',q||source_schema||q);
       /* apply the data filter if we don't want to copy the data */
       IF include_data = 0 THEN
          dbms_datapump.data_filter(h,'INCLUDE_ROWS',0);
       END IF;
       /* start the job */
       dbms_datapump.start_job(h);
       /* wait for the job to finish */
       dbms_datapump.wait_for_job(h, js);
       /* detach the job handle and free the resources */
       dbms_datapump.detach(h);
       /* open a new schema level import job */
       h := dbms_datapump.open ('IMPORT',  'SCHEMA');
       /* attach a file to the operation */
       DBMS_DATAPUMP.ADD_FILE (h, 'COPY_SCHEMA_EXP' ||copy_schema_unique_counter.CURRVAL || '.DMP', 'LOCAL_DATAPUMP_DIR');
       /* restrict to the schema we want to copy */
       dbms_datapump.metadata_filter (h, 'SCHEMA_LIST',q||source_schema||q);
       /* remap the importing schema name to the schema we want to create */     
       dbms_datapump.metadata_remap(h,'REMAP_SCHEMA',source_schema,destination_schema);
       /* remap the tablespace if needed */
       IF new_tablespace IS NOT NULL THEN
          select default_tablespace
          into v_old_tablespace
          from dba_users
          where username=source_schema;
          dbms_datapump.metadata_remap(h,'REMAP_TABLESPACE', v_old_tablespace, new_tablespace);
       END IF;
       /* apply the data filter if we don't want to copy the data */
       IF include_data = 0 THEN
          dbms_datapump.data_filter(h,'INCLUDE_ROWS',0);
       END IF;
       /* start the job */
       dbms_datapump.start_job(h);
       /* wait for the job to finish */
       dbms_datapump.wait_for_job(h, js);
       /* detach the job handle and free the resources */
       dbms_datapump.detach(h);
       /* change the password as the new user has the same password hash as the old user,
       which means the new user can't login! */
       execute immediate 'alter user '||destination_schema||' identified by '||NVL(new_password, destination_schema);
       /* finally, remove the dump file */
       utl_file.fremove('LOCAL_DATAPUMP_DIR','COPY_SCHEMA_EXP' ||copy_schema_unique_counter.CURRVAL|| '.DMP');
    /*EXCEPTION
       WHEN OTHERS THEN    --CLEAN UP IF SOMETHING GOES WRONG
          SELECT t.table_name
          INTO v_table_name
          FROM user_tables t, user_datapump_jobs j
          WHERE t.table_name=j.job_name
          AND j.state='NOT RUNNING';
          execute immediate 'DROP TABLE  ' || v_table_name || ' PURGE';
          RAISE;*/
    end copy_schema3;
/
The import part of the procedure takes about 2 minutes, which is the same time a regular Data Pump import takes on the same schema.
If I disable the import completely, the export alone still takes about 14 minutes.
Does anyone know why the export using DBMS_DATAPUMP takes so long?
Thanks.

    Hi,
    I did a tkprof on the DM trace file and this is what I found:
    Trace file: D:\Oracle\diag\rdbms\instanceid\instanceid\trace\instanceid_dm00_8004.trc
    Sort options: prsela  execpu  fchela 
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    SQL ID: bjf05cwcj5s6p
    Plan Hash: 0
    BEGIN :1 := sys.kupc$que_int.receive(:2); END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        3      0.00       0.00          0          0          0           0
    Execute    229      1.26     939.00         10       2445          0          66
    Fetch        0      0.00       0.00          0          0          0           0
    total      232      1.26     939.00         10       2445          0          66
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 2)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      wait for unread message on broadcast channel
                                                    949        1.01        936.39
********************************************************************************
What does "wait for unread message on broadcast channel" mean, and why did it take 939 seconds (more than 15 minutes)?
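As far as I know, "wait for unread message on broadcast channel" is an idle wait: it is the session sitting in dbms_datapump.wait_for_job, blocked until the job posts a status message, so those 939 seconds are spent inside the Data Pump worker processes rather than in the procedure itself. One way to see where the workers spend the time is to attach a log file to the export job so per-object timing is recorded. A minimal sketch against the procedure above (the log file name is illustrative):

    /* right after the export dbms_datapump.open, alongside the dump file */
    DBMS_DATAPUMP.ADD_FILE (
       handle    => h,
       filename  => 'copy_schema_exp.log',   -- hypothetical name
       directory => 'LOCAL_DATAPUMP_DIR',
       filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

Comparing the timestamps in that log with the log from a regular expdp run of the same schema should show which phase accounts for the extra minutes.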

  • Can I export using a single dump directory to multiple locations in Oracle

I'm trying to do a full database export using the expdp utility in Oracle 10g. I have a single dump directory that is mapped to a particular file location, say /export/dump. I don't want the entire dump file to be stored in the above path; instead I want the dump distributed among multiple files. I know that this can be done using the FILESIZE parameter, which will split the contents into multiple files according to the size specified.
My problem comes here: I want to export my data to multiple locations, with a path different from the one mentioned above, say /first/dump. Now my question is: should I create multiple dump directories, one for each location, before exporting, or can I omit the DIRECTORY attribute in expdp and specify the complete path in the FILE parameter itself?

No. expdp needs the server-side LOGICAL DIRECTORY component. If you don't specify the directory, it will go to the default expdp path, which is usually /rdbms/log under the Oracle home; this is defined by the DATA_PUMP_DIR parameter.
You will have to specify the directory attribute if you want your dump file to go to any specific location, and you cannot give the directory path in the file name in expdp (unlike conventional exp).
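That said, each entry in the DUMPFILE parameter can carry its own directory object (directory_object:file_name), so one way to spread the pieces over several paths is to create one directory object per location and list them all. A sketch with illustrative directory and file names:

    CREATE DIRECTORY dump_dir1 AS '/export/dump';
    CREATE DIRECTORY dump_dir2 AS '/first/dump';

    expdp system/password FULL=Y FILESIZE=5G
      DUMPFILE=dump_dir1:full_%U.dmp,dump_dir2:full_%U.dmp
      LOGFILE=dump_dir1:full_exp.log

As each file reaches FILESIZE, Data Pump moves on through the templates, so the dump ends up distributed across both paths.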
