Duplicate a schema in XE

A co-worker is looking to duplicate our schema in XE. By duplicate, I mean a copy of the TABLEs, CONSTRAINTs, INDEXes, VIEWs, and SEQUENCEs, but not the data. This should only need to be done once.
From: Oracle9i Enterprise Edition Release 9.2.0.8.0
To: Oracle Database 10g Express Edition Release 10.2.0.1.0
We do not have all rights in the 9i DB.
How do I go about copying the schema?

Hi,
>>The command I used was (from Windows): exp user/pass@tnsname TABLES=(user.tablename) ROWS=N LOG=c:\a\log.txt FILE=c:\a\exp.txt
Keep in mind that you are using the TABLES parameter. In this case, only tables will be exported, so views, functions, sequences, etc. won't be exported. Therefore, I advise you to perform the export below:
exp user/password file=user.dmp rows=n
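For the target side, a minimal sketch of the matching import into XE (assuming the dump file has been copied to the XE machine; the connect string and user names below are placeholders, not taken from this thread) would be:
imp system/password@XE file=user.dmp fromuser=source_user touser=target_user ignore=y
Because the export was taken with ROWS=N, only the object definitions (tables, constraints, indexes, views, sequences) are brought across, with no data.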
>>EXP-00009: no privilege to export <owner>'s table <table>
What user are you using to take the export?
C:\>exp a/a file=test tables=b.emp
Export: Release 10.1.0.2.0 - Production on Fri Apr 11 15:23:17 2008
Copyright (c) 1982, 2004, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
EXP-00009: no privilege to export A's table EMP
Export terminated successfully with warnings.
C:\>exp system/password file=test tables=b.emp
Export: Release 10.1.0.2.0 - Production on Fri Apr 11 15:23:36 2008
Copyright (c) 1982, 2004, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
Current user changed to B
. . exporting table                        EMP         88 rows exported
Export terminated successfully without warnings.
C:\>exp a/a file=test tables=emp
Export: Release 10.1.0.2.0 - Production on Fri Apr 11 15:23:36 2008
Copyright (c) 1982, 2004, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
. . exporting table                        EMP         88 rows exported
Export terminated successfully without warnings.
Cheers
Legatti

Similar Messages

  • Duplicate huge schema

    Hi all,
    I have a schema that contains 2000 GB of data.
    What is the best practice to duplicate the schema?
    Thanks
    dyahav

    Hi all,
    Thanks for the quick responses and great links.
    I have to copy one schema of 2000 GB on the same machine, and I can do it overnight (which means it can be done offline).
    Actually, I have no experience with a schema of this size, and I have to estimate the time required and find the best technique.
    1. Do you know how to estimate the time required?
    2. Is there any other technique to copy a schema?
    3. If I have to copy the schema to another machine (identical HW and SW), is there another technique?
    Thanks!
    dyahav
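    If the database is 10g or later, one common approach for a copy of that size (a hedged sketch, not from this thread; the directory object and schema names are placeholders) is a parallel Data Pump export/import:
    expdp system/password SCHEMAS=big_schema DIRECTORY=DATA_PUMP_DIR DUMPFILE=big_%U.dmp PARALLEL=4 LOGFILE=big_exp.log
    impdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=big_%U.dmp PARALLEL=4 REMAP_SCHEMA=big_schema:big_schema_copy LOGFILE=big_imp.log
    Running expdp first with ESTIMATE_ONLY=Y gives a rough size estimate up front, which helps with the timing question.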

  • Schema Export using DBMS_DATAPUMP is extremely slow

    Hi,
    I created a procedure that duplicates a schema within a given database by first exporting the schema to a dump file using DBMS_DATAPUMP and then importing the same file (I can't use a network link because it fails most of the time).
    My problem is that a regular schema Data Pump export takes about 1.5 minutes, whereas the export using DBMS_DATAPUMP takes about 10 times longer - something in the range of 14 minutes.
    here is the code of the procedure that duplicates the schema:
    CREATE OR REPLACE PROCEDURE MOR_DBA.copy_schema3 (
                                              source_schema in varchar2,
                                              destination_schema in varchar2,
                                              include_data in number default 0,
                                              new_password in varchar2 default null,
                                              new_tablespace in varchar2 default null
                                            ) as
      h   number;
      js  varchar2(9); -- COMPLETED or STOPPED
      q   varchar2(1) := chr(39);
      v_old_tablespace varchar2(30);
      v_table_name varchar2(30);
    BEGIN
       /* open a new schema level export job */
       h := dbms_datapump.open ('EXPORT',  'SCHEMA');
       /* attach a file to the operation */
       DBMS_DATAPUMP.ADD_FILE (h, 'COPY_SCHEMA_EXP' ||copy_schema_unique_counter.NEXTVAL || '.DMP', 'LOCAL_DATAPUMP_DIR');
       /* restrict to the schema we want to copy */
       dbms_datapump.metadata_filter (h, 'SCHEMA_LIST',q||source_schema||q);
       /* apply the data filter if we don't want to copy the data */
       IF include_data = 0 THEN
          dbms_datapump.data_filter(h,'INCLUDE_ROWS',0);
       END IF;
       /* start the job */
       dbms_datapump.start_job(h);
       /* wait for the job to finish */
       dbms_datapump.wait_for_job(h, js);
       /* detach the job handle and free the resources */
       dbms_datapump.detach(h);
       /* open a new schema level import job */
       h := dbms_datapump.open ('IMPORT',  'SCHEMA');
       /* attach a file to the operation */
       DBMS_DATAPUMP.ADD_FILE (h, 'COPY_SCHEMA_EXP' ||copy_schema_unique_counter.CURRVAL || '.DMP', 'LOCAL_DATAPUMP_DIR');
       /* restrict to the schema we want to copy */
       dbms_datapump.metadata_filter (h, 'SCHEMA_LIST',q||source_schema||q);
       /* remap the importing schema name to the schema we want to create */     
       dbms_datapump.metadata_remap(h,'REMAP_SCHEMA',source_schema,destination_schema);
       /* remap the tablespace if needed */
       IF new_tablespace IS NOT NULL THEN
          select default_tablespace
          into v_old_tablespace
          from dba_users
          where username=source_schema;
          dbms_datapump.metadata_remap(h,'REMAP_TABLESPACE', v_old_tablespace, new_tablespace);
       END IF;
       /* apply the data filter if we don't want to copy the data */
       IF include_data = 0 THEN
          dbms_datapump.data_filter(h,'INCLUDE_ROWS',0);
       END IF;
       /* start the job */
       dbms_datapump.start_job(h);
       /* wait for the job to finish */
       dbms_datapump.wait_for_job(h, js);
       /* detach the job handle and free the resources */
       dbms_datapump.detach(h);
       /* change the password as the new user has the same password hash as the old user,
       which means the new user can't login! */
       execute immediate 'alter user '||destination_schema||' identified by '||NVL(new_password, destination_schema);
       /* finally, remove the dump file */
       utl_file.fremove('LOCAL_DATAPUMP_DIR','COPY_SCHEMA_EXP' ||copy_schema_unique_counter.CURRVAL|| '.DMP');
    /*EXCEPTION
       WHEN OTHERS THEN    --CLEAN UP IF SOMETHING GOES WRONG
          SELECT t.table_name
          INTO v_table_name
          FROM user_tables t, user_datapump_jobs j
          WHERE t.table_name=j.job_name
          AND j.state='NOT RUNNING';
          execute immediate 'DROP TABLE  ' || v_table_name || ' PURGE';
          RAISE;*/
    end copy_schema3;
    /
    The import part of the procedure takes about 2 minutes, which is the same time a regular Data Pump import takes on the same schema.
    If I disable the import completely, the export alone still takes about 14 minutes.
    Does anyone know why the export using DBMS_DATAPUMP takes so long?
    thanks.

    Hi,
    I did a tkprof on the DM trace file and this is what I found:
    Trace file: D:\Oracle\diag\rdbms\instanceid\instanceid\trace\instanceid_dm00_8004.trc
    Sort options: prsela  execpu  fchela 
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    SQL ID: bjf05cwcj5s6p
    Plan Hash: 0
    BEGIN :1 := sys.kupc$que_int.receive(:2); END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        3      0.00       0.00          0          0          0           0
    Execute    229      1.26     939.00         10       2445          0          66
    Fetch        0      0.00       0.00          0          0          0           0
    total      232      1.26     939.00         10       2445          0          66
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 2)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      wait for unread message on broadcast channel
                                                    949        1.01        936.39
    ********************************************************************************
    What does "wait for unread message on broadcast channel" mean, and why did it take 939 seconds (more than 15 minutes)?
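    As a generic way to see where the time goes while such a job runs (a hedged sketch, not from this thread), the current wait events of the Data Pump sessions can be watched from another session:
    SELECT s.sid, s.program, s.event, s.seconds_in_wait
    FROM   v$session s, dba_datapump_sessions d
    WHERE  s.saddr = d.saddr;
    Queue waits like the one above generally mean a Data Pump process is sitting idle waiting for messages from its peers rather than doing I/O itself.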

  • Sample Schemas

    Hi Every one,
    I just installed Oracle 11g Enterprise Edition on my laptop.
    All is good and I successfully connected to the database in SQL Developer, but I am not able to see the HR schema; all I see is something different.
    Can anyone help me with this issue?
    Thank
    Any info much appreciated
    cheers

    Please do not post duplicates - Sample Schema download
    Continue the discussion in your original thread

  • 100% Complete Schema Duplication

    100% Complete Schema Duplication
    10gR2
    Need to duplicate a schema 100%, including packages, types, and all objects generated by XDB, etc.
    Does expdp/impdp do that ?
    (or what does expdp/impdp skip, if any ?)
    thanks

    A schema-level export should do it. It gets all objects owned by that user/schema.
    Thanks
    chandra
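    One simple way to verify nothing was skipped (a hedged sketch; the schema names are placeholders) is to compare object counts per type between the source schema and the copy after the import:
    SELECT owner, object_type, COUNT(*) AS cnt
    FROM   dba_objects
    WHERE  owner IN ('SOURCE_SCHEMA', 'COPY_SCHEMA')
    GROUP  BY owner, object_type
    ORDER  BY object_type, owner;
    Any object type that appears for the source but not for the copy is something the export/import pair skipped.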

  • Images (w/correct meta data) are in catalog and on disk, but LR 5.7 considers them new on Import

    For reasons explained below, I want to try to re-import all my images into LR and hope that none/few are in fact considered new and are imported.  Yet, for some folders, LR is apparently unable to detect that my source images are already in the catalog, and are on disk, and that the source file meta data matches what LR knows about the images.  When I click an image in LR and Show in Finder, I do see the imported image on disk.  I can edit the image in the Develop module.  So, it seems good, but all is not well.   Sorry for the long post here, but I wanted to provide as much info as I could, as I am really seeking your help, which I'd very much appreciate.
    Here are some screen shots that illustrate the problem:
    Finder contents of the original images
    LR folder hierarchy
    an image as seen in LR
    Finder content of external LR copy of images
    import showing 10 "new" photos
    The original images ... (I'm not sure why the file date is April 2001 but the actual image date is January 2011; I may have just used the wrong date on the folder name?)
    The LR folder hierarchy ...
    An image as seen in LR ...
    The external folder containing the images in the LR library
    But on import of the original source folder, LR sees 10 "new" photos ...
    I tried "Synchronize Folder ..." on this particular folder, and it simply hangs half-way through as seen in the screen shot below.   IS THIS AN LR BUG?   This is really odd, since "Synchronize Folder ..." on the top-level folder completes quickly.
    I have a spreadsheet of the EXIF data for the original files and those created by LR.  (I extracted this info using the excellent and free pyExifToolGui graphical frontend for the command line tool ExifTool by Phil Harvey.)  Almost all of the Exif data is the same, but LR has added some additional info to the files after import, including (of course) keywords.  However, I would not have expected the differences I found to enter into the duplicate detection scheme.  (I didn't see a way to attach the spreadsheet to this posting as it's not an "image".)
    I'm running LR 5.7 on a 27" iMac with Yosemite 10.10.2, having used LR since LR2.  I have all my original images (.JPEGs and RAWs of various flavors) on my internal drive on the Mac.   To me this is like saving all my memory cards and never re-using them.   Fortunately, these files are backed up several ways.   I import these images (copying RAWs as DNG) into LR with a renaming scheme that includes the import number, original file creation date and original file name.   There should be one LR folder for each original source file folder, with the identical folder name (usually a place and date).  I store the LR catalog and imported images on an external drive.  Amazingly and unfortunately my external drive failed as did it's twin, same make/size drive that I used as a backup with Carbon Copy Cloner.   I used Data Rescue 4 to recover to a new disk what I thought was almost all of the files on the external drive.
    So, I thought all would be well, but when I tried "Synchronize Folder" using the top-level folder of my catalog, the dialog box appeared saying there were over 1000 "New" photos that had not been imported.  This made me suspicious that I had failed to recover everything.  But actually things are much worse than I thought.  I have these counts of images:
    80,0061 files in 217 folders for my original source files (some of these may be (temporary?) copies that I actually don't want to import into LR)
    51,780 files in 187 folders on my external drive containing the LR photo library
    49,254 images in the top-level folder in the LR catalog (why different from the external file count?)
    35,332 images found during import of the top-level folder containing original images
    22,560 images found as "new" by LR during import
    1,074 "new" images reported by Synchronize Folder ... on the top-level folder in the catalog; different from import count
    Clearly things are badly out of sync.   I'd like to be sure I have all my images in LR, but none duplicated.   Thus, I want to try to import the entire library and have LR tell me which photos are new.  I have over 200 folders in LR.  I am now proceeding to try importing each folder, one at a time, to try to reconcile the differences and import the truly missing images.  This will be painful.  And it may not be enough to fully resolve the above discrepancies.
    Does anyone have any ideas or suggestions?  I'd really appreciate your help!
    Ken

    Thanks for being on the case, dj!   As you'll see below, YOU WERE RIGHT!      But I am confused.
        1. Does the same problem exist if you try to import (not synchronize) from that folder? In other words, does import improperly think these are not duplicates?
    YES.  Import improperly thinks they are NOT duplicates, but they are in fact the same image (but apparently not the EXACT SAME bytes on disk!)
        2. According to the documentation, a photo is considered a duplicate "if it has the same, original filename; the same Exif capture date and time; and the same file size."
    This is my understanding too.
        3. Can you manually confirm that, for an example photo, that by examining the photo in Lightroom and the photo you are trying to synchronize/import, that these three items are identical?
    NO, I CAN'T!  The ORIGINAL file name (in the source folder) is the SAME as it was when I first imported that folder.  That name is used as part of the renaming process using a custom template. However, the file SIZES are different.  Here is the Finder Get Info for both files.  Initially, they appeared to be the same SIZE, 253KB, looking at the summary. But if you look at the exact byte count, the file sizes are DIFFERENT: 252,632 bytes for the original file and 252,883 bytes for the already-imported file:
    This difference alone is enough to indicate why LR does not consider the file a duplicate.
    Furthermore, there IS one small difference in the EXIF data regarding dates ... the DateTimeOriginal:
    ORIGINAL name: P5110178.JPG
        CreateDate:        2001:05:11 15:27:18
        DateTimeDigitized: 2001:05:11 15:27:18-07:00
        DateTimeOriginal:  2001:01:17 11:29:00
        FileModifyDate:    2011:01:17 11:29:00-07:00
        ModifyDate:        2005:04:24 14:41:05
    After LR rename: KRJ_0002_010511_P5110178.JPG
        CreateDate:        2001:05:11 15:27:18
        DateTimeDigitized: 2001:05:11 15:27:18-07:00
        DateTimeOriginal:  2001:05:11 15:27:18
        FileModifyDate:    2011:01:17 11:29:02-07:00
        ModifyDate:        2005:04:24 14:41:05
    So ... now I see TWO reasons why LR doesn't consider these duplicates.   Though the file NAME is the same (as original), the file sizes ARE slightly different.  The EXIF "DateTimeOriginal" is DIFFERENT.   Therefore, LR considers them NOT duplicates.
         4a. With regards to the screen captures of your images and operating system folder, I do not see that the filename is the same; I see the file names are different. Is that because you renamed the photos in Lightroom (either during import or afterwards)?
    I renamed the file on import using a custom template ...
            4b. Can you show a screen capture of this image that shows the original file name in the Lightroom metadata panel (it appears when the dropdown is set to EXIF and IPTC)?
    SO ....
    The METADATA shown by LR does NOT include the ORIGINAL file name (but I think I have seen it displayed for other files?).  The File SIZE in the LR metadata panel (246.96 KB) is different from what Finder reports (254 KB).  There are three "date" fields in the LR metadata, and five that I've extracted from the EXIF data.   I'm not sure which EXIF date corresponds to the "Data Time" shown in the LR metadata.
    I don't understand how these differences arose.  I did not touch the original file outside LR.  LR is the only program that touches the file it has copied to my external drive during import (though it was RECOVERED from a failed disk by Data Rescue 4).
    NOW ...
    I understand WHY LR considers the files different (but not how they came to be so).  The question now is WHAT DO I DO ABOUT IT?   Is there any tool I can use to adjust the original (or imported) file's SIZE and EXIF data to match the file LR has?  Any way to override or change how LR does duplicate detection?
    Thanks so very much, dj.   Any ideas on how to get LR to ignore these (minor) differences would be hugely helpful.

  • Error when creating Web Service Proxy

    Hi,
    I am creating a web service proxy to call a web service (OSB) "http://st-curriculum.oracle.com/obe/jdev/obe11jdev/ps1/webservices/ws.html#t5". I am making a client to call an OSB proxy and I am able to see the WSDL in IE; however, when I try to create the client using JDeveloper I get the error below.
    Please advise.
    Thanks
    Yatan
    oracle.jdeveloper.webservices.model.WebServiceException: Error creating model from wsdl "http://localhost:8001/xx/som/contracts/CustomerContract?wsdl": A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.TaskCompletionMessage" is already in use. Use a class customization to resolve this conflict.(Relevant to above error) another "TaskCompletionMessage" is generated from here.(Relevant to above error) another "SOMMessage" is generated from here.A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.SOMMessage" is already in use. Use a class customization to resolve this conflict.Two declarations cause a collision in the ObjectFactory class.(Related to above error) This is the other declaration.
         at oracle.jdeveloper.webservices.model.java.JavaWebService.createPortTypes(JavaWebService.java:1635)
         at oracle.jdeveloper.webservices.model.WebService.createServiceFromWSDL(WebService.java:2846)
         at oracle.jdeveloper.webservices.model.WebService.createServiceFromWSDL(WebService.java:2611)
         at oracle.jdeveloper.webservices.model.java.JavaWebService.<init>(JavaWebService.java:509)
         at oracle.jdeveloper.webservices.model.java.JavaWebService.<init>(JavaWebService.java:461)
         at oracle.jdeveloper.webservices.model.proxy.WebServiceProxy$ProxyJavaWebService.<init>(WebServiceProxy.java:2268)
         at oracle.jdeveloper.webservices.model.proxy.WebServiceProxy.updateServiceModel(WebServiceProxy.java:1701)
         at oracle.jdeveloper.webservices.model.proxy.WebServiceProxy.setDescription(WebServiceProxy.java:525)
         at oracle.jdevimpl.webservices.wizard.jaxrpc.proxy.ProxyJaxWsSpecifyWSDLPanel.setDescription(ProxyJaxWsSpecifyWSDLPanel.java:238)
         at oracle.jdevimpl.webservices.wizard.jaxrpc.common.SpecifyWsdlPanel.buildModel(SpecifyWsdlPanel.java:1109)
         at oracle.jdevimpl.webservices.wizard.jaxrpc.common.SpecifyWsdlPanel$5.run(SpecifyWsdlPanel.java:661)
         at oracle.ide.dialogs.ProgressBar.run(ProgressBar.java:655)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: oracle.jdeveloper.webservices.tools.WsdlValidationException: Error creating model from wsdl "http://localhost:8001/xx/som/contracts/CustomerContract?wsdl": A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.TaskCompletionMessage" is already in use. Use a class customization to resolve this conflict.(Relevant to above error) another "TaskCompletionMessage" is generated from here.(Relevant to above error) another "SOMMessage" is generated from here.A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.SOMMessage" is already in use. Use a class customization to resolve this conflict.Two declarations cause a collision in the ObjectFactory class.(Related to above error) This is the other declaration.
         at oracle.jdevimpl.webservices.tools.wsa.WsaAdaptor.newWsdlValidationException(WsaAdaptor.java:825)
         at oracle.jdevimpl.webservices.tools.wsa.WsaAdaptor.getSeiInfo(WsaAdaptor.java:515)
         at oracle.jdeveloper.webservices.tools.WebServiceTools.getSeiInfo(WebServiceTools.java:523)
         at oracle.jdeveloper.webservices.model.java.JavaWebService.getSeiInfo(JavaWebService.java:1741)
         at oracle.jdeveloper.webservices.model.java.JavaWebService.createPortTypes(JavaWebService.java:1496)
         ... 12 more
    Caused by: oracle.j2ee.ws.common.tools.api.ValidationException: Error creating model from wsdl "http://localhost:8001/xx/som/contracts/CustomerContract?wsdl": A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.TaskCompletionMessage" is already in use. Use a class customization to resolve this conflict.(Relevant to above error) another "TaskCompletionMessage" is generated from here.(Relevant to above error) another "SOMMessage" is generated from here.A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.SOMMessage" is already in use. Use a class customization to resolve this conflict.Two declarations cause a collision in the ObjectFactory class.(Related to above error) This is the other declaration.
         at oracle.j2ee.ws.tools.wsa.jaxws.JaxwsWsdlToJavaTool.getJAXWSModel(JaxwsWsdlToJavaTool.java:664)
         at oracle.j2ee.ws.tools.wsa.WsdlToJavaTool.createJAXWSModel(WsdlToJavaTool.java:475)
         at oracle.j2ee.ws.tools.wsa.Util.getJaxWsSeiInfo(Util.java:1357)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at oracle.jdevimpl.webservices.tools.wsa.Assembler$2$1.invoke(Assembler.java:218)
         at $Proxy50.getJaxWsSeiInfo(Unknown Source)
         at oracle.jdevimpl.webservices.tools.wsa.WsaAdaptor.getSeiInfo(WsaAdaptor.java:505)
         ... 15 more
    Caused by: oracle.j2ee.ws.common.tools.api.ValidationException: A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.TaskCompletionMessage" is already in use. Use a class customization to resolve this conflict.(Relevant to above error) another "TaskCompletionMessage" is generated from here.(Relevant to above error) another "SOMMessage" is generated from here.A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.SOMMessage" is already in use. Use a class customization to resolve this conflict.Two declarations cause a collision in the ObjectFactory class.(Related to above error) This is the other declaration.
         at oracle.j2ee.ws.tools.wsa.SchemaTool.genValueTypes(SchemaTool.java:188)
         at oracle.j2ee.ws.tools.wsa.jaxws.JaxwsWsdlToJavaTool.getJAXWSModel(JaxwsWsdlToJavaTool.java:647)
         ... 24 more
    Caused by: oracle.j2ee.ws.common.databinding.common.spi.DatabindingException: A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.TaskCompletionMessage" is already in use. Use a class customization to resolve this conflict.(Relevant to above error) another "TaskCompletionMessage" is generated from here.(Relevant to above error) another "SOMMessage" is generated from here.A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.SOMMessage" is already in use. Use a class customization to resolve this conflict.Two declarations cause a collision in the ObjectFactory class.(Related to above error) This is the other declaration.
         at oracle.j2ee.ws.common.tools.databinding.jaxb20.JAXB20TypeGenerator.generateJavaTypes(JAXB20TypeGenerator.java:120)
         at oracle.j2ee.ws.tools.wsa.SchemaTool.genValueTypes(SchemaTool.java:186)
         ... 25 more

    Hi Yatan
    The error most likely means there is a duplicate variable/schema element declared in the WSDL or in an XSD referred to by the WSDL. In a WSDL, the inputs and outputs of operations are usually complex XSD elements; these XSDs are declared in the same file or in another file that is imported into the .wsdl file. So check or validate your XSD files for duplicates.
    In JDeveloper itself, I think you can open the XSD or WSDL and validate it from the right-click menu.
    Thanks
    Ravi Jegga

  • Error when creating web service client in netbeans

    I tried to create a web service client from a WSDL and an error popped up:
    web service client can not be created by jaxws:wsimport utility.
    reason: com.sun.tools.xjc.api.schemacompiler.resetschema()v
    There might be a problem during java artifacts creation: for example a name conflict in generated classes.
    To detect the problem see also the error messages in output window.
    You may be able to fix the problem in WSDL Customization dialog
    (Edit Web Service Attributes action)
    or by manual editing of the local wsdl or schema files, using the JAXB customization
    (local wsdl and schema files are located in xml-resources directory).
    end of error message
    I am using NetBeans 6.0 RC 2 and the bundled Tomcat 6.0.13. Please help me.

    Hi Yatan
    The error most likely means there is a duplicate variable/schema element declared in the WSDL or in an XSD referred to by the WSDL. In a WSDL, the inputs and outputs of operations are usually complex XSD elements; these XSDs are declared in the same file or in another file that is imported into the .wsdl file. So check or validate your XSD files for duplicates.
    In JDeveloper itself, I think you can open the XSD or WSDL and validate it from the right-click menu.
    Thanks
    Ravi Jegga

  • Import Process Slow .. Suggestions for Speedup ?

    Hi,
    I have been importing data from a dump file for the last 15 hours. The database is 15 GB, on 10.2.0.3.
    The majority of the size is in one particular schema.
    The requirement is to duplicate the schema into a new tablespace. I created the structure from the DDL in an index file and am then loading data using:
    imp system/password buffer=125000000 file=............. log=....._rows.log fromuser=scott touser=alice rows=y ignore=y constraints=n indexes=n
    Somehow, for the last 12 or so hours it has been trying to load data into a table of 6 million rows, and it still hasn't completed.
    I can see that the UNDO tablespace has crossed 6 GB and the tablespace for the schema is around 3 GB.
    The redo logs are 50 MB each in 3 groups, and there are constant "checkpoint incomplete" warnings as well.
    Now this is a test machine, but the machine where this is supposed to happen tomorrow is a 9.2.0.3 on a Solaris box.
    Is there any way to speed up this process, maybe by stopping all the logging? What other ways are there to import all this data faster than the current, unpredictably slow process?
    Thanks.

    If you are copying data within the same database, why not use CTAS or INSERT, with PARALLEL and APPEND?
    e.g.:
    ALTER SYSTEM SET PARALLEL_MAX_SERVERS=8;
    CREATE TABLE newschema.table_a AS SELECT * FROM oldschema.table_a where 1=2;
    ALTER TABLE newschema.table_a NOLOGGING;
    ALTER TABLE newschema.table_a PARALLEL 2;
    ALTER TABLE oldschema.table_b PARALLEL 2;
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND  */ INTO newschema.table_a SELECT * FROM oldschema.table_a ;
    COMMIT;
    ALTER TABLE oldschema.table_a NOPARALLEL;
    CREATE INDEX newschema.table_a_ndx_1 ON newschema.table_a(col1,col2) PARALLEL 4 NOLOGGING;
    ALTER TABLE newschema.table_a NOPARALLEL;
    ALTER INDEX newschema.table_a_ndx_1 NOPARALLEL;
    Run multiple tables in parallel (table_a being handled in the block of code above, table_b in another block of code running concurrently), with each block of code using 4 ParallelQuery operators.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com

  • Sequences on multiple schemas. How to use them?

    I'm having a big problem using sequences correctly in Oracle Lite.
    Until now I had only one Olite publication linked to a schema on the back-end DB, with many objects: tables, indexes, views and 5 sequences.
    Example:
    publication --> schema
    schema.seq1, schema.seq2, ... , schema.seq5
    reachable by olite client as:
    db = $USER_PUBLICATION
    Now i have to "duplicate" this schema and deploy another publication, adding a prefix.
    Example:
    new_publication -> new_schema
    db = $USER_NEW_PUBLICATION
    I found no problem with any objects except the sequences,
    because tables, indexes, etc. had the right schema as "owner",
    but I did not find any owner-like parameter for sequences.
    To explain better:
    I need to use the same sequence names (seq1, seq2, ..., seq5) for both publications,
    each referring to the right sequences in the right schema, but I'm not sure if (and how) this is possible.
    Could you help me?
    Thanks.
    Daniele

    We had a similar issue when we changed our application from 'standard' Oracle Lite (complete and fast refresh PIs) to queue-based PIs.
    We worked around the problem by:
    1) not including the sequences at all in the second application (you get errors if you do)
    2) running the following (fusionbci2 is our new application, fusionbci is the old one, and fusionbci2 is effectively a clone of fusionbci)
    -- fusionbci2_sequences.sql
    -- associate all sequences with the fusionbci2 application
    DECLARE
    l_fusionbci2 VARCHAR2(30);
    -- this cursor gets the internal publication name for the fusionbci2 application
    CURSOR c_fusionbci2 IS
    SELECT name
    FROM cv$all_publications
    WHERE name_template LIKE 'fusionbci2%';
    BEGIN
    OPEN c_fusionbci2;
    FETCH c_fusionbci2 INTO l_fusionbci2;
    CLOSE c_fusionbci2;
    -- this insert creates copies of the sequence relationship entries for the fusionbci application
    -- linked to fusionbci2
    INSERT INTO c$pub_objects
    (pub_name
    ,db_name
    ,obj_id
    ,obj_type
    ,obj_weight)
    SELECT l_fusionbci2
    ,'fusionbci2'
    ,obj_id
    ,obj_type
    ,obj_weight
    FROM c$pub_objects
    WHERE db_name='fusionbci'
    AND obj_type='NA_SEQ';
    END;
    /
    NOTE: this works, but is probably not recommended by Oracle, and should only be used where individual clients will only use one of the applications at a time (if they use both, the sequence ranges are likely to overlap).

  • Impdp ORA-31684: Object type USER:"USERNAME" already exists

    Hi,
    I use expdp/impdp to duplicate one schema in a database like this:
    expdp system/manager@DB SCHEMAS=SCHEMANAME
    then I drop the destination schema like this:
    drop user SCHEMANAME_NEW cascade;
    create user SCHEMANAME_NEW identified by PW default tablespace TABLESPACENAME;
    and impdp like this
    impdp system/manager@DB FULL=Y REMAP_SCHEMA=SCHEMANAME:SCHEMANAME_NEW
    and I get this error:
    Processing object type SCHEMA_EXPORT/USER
    ORA-31684: Object type USER:"SCHEMANAME_NEW" already exists
    I know that the import was successful, but this error breaks my Hudson build.
    I tried to add an exclude like this:
    impdp system/manager@DB FULL=Y REMAP_SCHEMA=SCHEMANAME:SCHEMANAME_NEW exclude=USER:\"=\'SCHEMANAME_NEW\'\"
    I need to get rid of this error.
    Thx

    You get this error because you precreated the user. All you need to do is add
    exclude=user
    to either your expdp or impdp command. Or let impdp create the user for you. If you need it to have a different tablespace you can just use the
    remap_tablespace=old_tbs:new_tbs
    This should take care of the default tablespace on the create user command.
    Dean
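    Putting those suggestions together, a hedged sketch of the import (the tablespace names are placeholders, not from this thread) might look like:
    impdp system/manager@DB FULL=Y REMAP_SCHEMA=SCHEMANAME:SCHEMANAME_NEW REMAP_TABLESPACE=OLD_TBS:NEW_TBS EXCLUDE=USER
    With EXCLUDE=USER the precreated SCHEMANAME_NEW account is left untouched, so ORA-31684 should no longer be raised and the Hudson build should no longer break on it.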

  • Duplicate data in front scheme and back scheme?

    Assume that a near cache has been defined with a front-scheme and a back-scheme. The front-scheme is a local-scheme and the back-scheme is a distributed-scheme.
    Now assume that there is a piece of data for which the current JVM is a master. That is, the data resides in the back cache.
    Assume that the JVM accesses that data.
    Will this populate the front cache even though the master is in the back cache on the same JVM?
    Does Coherence therefore store two copies of the data, one in the front cache, one in the back cache?
    Or does Coherence realize that the requested data is in the back cache on the same JVM, and therefore doesn't make another copy and populate the front cache, but instead simply redirects the get call to its own back cache?
    This basically has memory implications and I'm trying to configure a cache that works on a single JVM as well as a cluster. I am trying to get away with having just the one configuration file for coherence irrespective of whether the application is on a single storage enabled node, or a cluster.
    I cannot tell from the diagram in the Developer Guide at http://docs.oracle.com/cd/E24290_01/coh.371/e22837/cache_intro.htm#BABCJFHE
    because I'm not sure whether the front/local cache for JVM1 is shown as not populated with A because JVM1 is the master for A, or because JVM1 never accessed A.

    926349 wrote:
    Assume that a near cache has been defined with a front-scheme and a back-scheme. The front-scheme is a local-scheme and the back-scheme is a distributed-scheme.
    Now assume that there is a piece of data for which the current JVM is a master. That is, the data resides in the back cache.
    Assume that the JVM accesses that data.
    Will this populate the front cache even though the master is in the back cache on the same JVM?
    Yes, it will, if access is via the near cache.
    Does Coherence therefore store two copies of the data, one in the front cache, one in the back cache?
    Yes, but with usual usage patterns the front map is size-limited, so it will hold the just-accessed (by primary key) entry in the front map but may evict other entries from it; the front map may therefore not be a full duplicate of the entire data set.
    Or does Coherence realize that the requested data is in the back cache on the same JVM, and therefore doesn't make another copy and populate the front cache, but instead simply redirects the get call to its own back cache?
    Nope, no such logic is in place. On the other hand, as I mentioned front-map is usually size-limited, and also the storage nodes don't have to access the cache via the front-map, they can go directly to the distributed cache by getting the back-cache from the near cache instance.
    This basically has memory implications and I'm trying to configure a cache that works on a single JVM as well as a cluster. I am trying to get away with having just the one configuration file for coherence irrespective of whether the application is on a single storage enabled node, or a cluster.
    You should not try to. You should always choose the correct topology and configuration for the actual use case. There is no one-size-optimally-fits-all solution.
    Best regards,
    Robert

  • Duplicate includes in schema document

    Hi,
    We are using the 9i XDK and Java stored procedures.
    We have a scenario where a schema is composed of multiple documents.
    Document A defines common basic types and is included in documents B, C, D, and E.
    One or more of these documents are included in top-level schema documents (say, F, G, etc.).
    The schema builder is complaining about duplicate includes because document A gets included multiple times when we include, say, B and C in a top-level document.
    Is there any way we can make XSDBuilder ignore the duplicate includes?

    Hi,
    I created a quotation on 2009.07.28. Then today I modified the items. After I saved this quotation and checked the document flow, the quotation is displayed twice, like this:
    Quotation 0020000048 2009.07.28 open
    Quotation 0020000048 2009.07.31 open
    It's so strange.
    Any suggestion is welcome.
    Thanks.
    Best regards,
    chris Gu

  • Duplicate schema definitions

    I have two different Web service data sources that I'm trying to use within a project; both define the same element (with the same name and namespace!) but define it differently. Since changing the Web services to use different namespaces isn't an option (I don't have any control over these Web services; this is a constraint I have to live with, unfortunately), I'm trying to figure out how to integrate data from both data sources without complaints about duplicate definitions for the same element.
    To simplify things (I think), I created two simple .xsd files, SchemaA and SchemaB, both of which define the same complex element PersonName in the same namespace but with different sub-elements. SchemaA has PersonName defined with FirstName and LastName subelements. SchemaB defines it with a PersonFullName subelement.
    I then create an Application in AquaLogic and create three Data Services projects within it, ProjectA, ProjectB and ProjectAB, and import SchemaA into ProjectA and SchemaB into ProjectB.
    In ProjectA, I create a data service ServiceA with a function that returns an instance of the complex type from SchemaA
    In ProjectB, I create a data service ServiceB with a function that returns an instance of the complex type from SchemaB.
    In ProjectAB I create a data service ServiceAB and select "Create XML Type" to configure it to return a new XML type (in a separate namespace from the one used for SchemaA and SchemaB). I add a function to return this new XML type. In the XQuery Editor View I drag the function from ServiceA and attempt to drag the function from ServiceB, but get an error indicating that "PersonName has already been defined". No surprise, really.
    It makes sense that I can't have two different definitions for the same type within a single XQuery. So I create a new data service, newServiceA, in ProjectA. This data service uses ServiceA as input and returns a new XML type in a different namespace than the one used by SchemaA.
    I now try to create myServiceAB again. I use newServiceA as an input and ServiceB as an input. In theory, since the outputs of newServiceA and ServiceB are in different namespaces, there shouldn't be a conflict, right? Well, there is. I don't get an exception at design time but get one when I execute it in the Test View. The exception complains that my type has already been imported.
    So now I think that maybe I need to isolate SchemaA and SchemaB into two separate applications (EARs). I create ApplicationA, ApplicationB, and ApplicationAB.
    In ApplicationA I create ServiceA and newServiceA as described above (so newServiceA returns XML that conforms to a schema with its own unique namespace)
    I do the same thing in ApplicationB
    In ApplicationAB I want to create a data service that combines the output from newServiceA and newServiceB. But it doesn't seem that a data service can reference a data service in another application/EAR.
    How do I proceed? Has anyone run into anything like this?

    I ran into a similar problem that led me to this topic after searching for an answer... I'm not sure it's exactly the same but I figured I'd post here anyways since it seems very similar.
    My co-worker has defined a set of subsets of the GJXDM schema representing person, activity, document, etc.
    These are each being used to create a logical data service which I'm then mapping some data to. The problem I'm having is that ALDSP is complaining about duplicates when I build the project.
    The way I have it set up is one LogicalView folder under the project with a schema subfolder and several folders under that representing each version of the jxdm subset corresponding to a data service (i.e. person, activity, etc...). LogicalView also has a .ds for each of these schema subsets. Note that there is NO relation between these logical data services which is why I'm confused there is a conflict...
    Here are the errors I'm getting:
    ERROR: LogicalView/jxdmPerson.ds
    ERROR: jxdmPerson.ds:10:: ld:DataServices/LogicalView/jxdmPerson.ds, line 10, column 8: {err}XQ0035: "Type [QName {http://www.it.ojp.gov/jxdm/3.0.3/proxy/xsd/1.0}time] has been already defined!": name previously imported
    ERROR: LogicalView/jxdmActivity.ds
    ERROR: jxdmActivity.ds:10:: ld:DataServices/LogicalView/jxdmActivity.ds, line 10, column 8: {err}XQ0035: "Type [QName {http://www.it.ojp.gov/jxdm/3.0.3}DocumentControlMetadataType] has been already defined!": name previously imported
    ERROR: LogicalView/jxdmLocation.ds
    ERROR: jxdmLocation.ds:10:: ld:DataServices/LogicalView/jxdmLocation.ds, line 10, column 8: {err}XQ0035: "Type [QName {http://www.it.ojp.gov/jxdm/3.0.3}TextType] has been already defined!": name previously imported
    I'm assuming this has something to do with the fact that some of the elements of the jxdm schema are defined in multiple subsets that I'm using independently for each logical data service but I'm looking for more details on how this works.
    Thanks a lot,
    Kevin
    EDIT: Oh one other interesting detail is that I can run a test on the Person datasource in "test view" (which I have a few basic mappings completed on) and it works - returns the results I expect.
    Message was edited by:
    kreg77

  • Moving Subpartitions to a duplicate table in a different schema.

    +NOTE: I asked this question on the PL/SQL and SQL forum, but have moved it here as I think it's more appropriate to this forum. I've placed a pointer to this post on the original post.+
    Hello Ladies and Gentlemen.
    We're currently involved in an exercise at my workplace where we are in the process of attempting to logically organise our data by global region. For information, our production database is currently at version 10.2.0.3 and will shortly be upgraded to 10.2.0.5.
    At the moment, all our data 'lives' in the same schema. We are in the process of producing a proof of concept to migrate this data to identically structured (and named) tables in separate database schemas; each schema to represent a global region.
    In our current schema, our data is range-partitioned on date, and then list-partitioned on a column named OFFICE. I want to move the OFFICE subpartitions from one schema into an identically named and structured table in a new schema. The tablespace will remain the same for both identically-named tables across both schemas.
    Do any of you have an opinion on the best way to do this? Ideally in the new schema, I'd like to create each new table as an empty table with the appropriate range and list partitions defined. I have been doing some testing in our development environment with the EXCHANGE PARTITION statement, but this requires the destination table to be non-partitioned.
    I just wondered if, for partition migration across schemas with the table name and tablespace remaining constant, there is an official "best practice" method of accomplishing such a subpartition move neatly, quickly and elegantly?
    Any helpful replies welcome.
    Cheers.
    James

    You CAN exchange a subpartition into another table using a "temporary" (staging) table as an intermediary.
    See :
    SQL> drop table part_subpart purge;
    Table dropped.
    SQL> drop table NEW_part_subpart purge;
    Table dropped.
    SQL> drop table STG_part_subpart purge;
    Table dropped.
    SQL>
    SQL> create table part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition p_1 values less than (10) (subpartition p_1_s_1 values ('A'), subpartition p_1_s_2 values ('B'), subpartition p_1_s_3 values ('C'))
      5  ,
      6  partition p_2 values less than (20) (subpartition p_2_s_1 values ('A'), subpartition p_2_s_2 values ('B'), subpartition p_2_s_3 values ('C'))
      7  )
      8  /
    Table created.
    SQL>
    SQL> create index part_subpart_ndx on part_subpart(col_1) local;
    Index created.
    SQL>
    SQL>
    SQL> insert into part_subpart values (1,'A');
    1 row created.
    SQL> insert into part_subpart values (2,'A');
    1 row created.
    SQL> insert into part_subpart values (2,'B');
    1 row created.
    SQL> insert into part_subpart values (2,'B');
    1 row created.
    SQL> insert into part_subpart values (2,'C');
    1 row created.
    SQL> insert into part_subpart values (11,'A');
    1 row created.
    SQL> insert into part_subpart values (11,'C');
    1 row created.
    SQL>
    SQL> commit;
    Commit complete.
    SQL>
    SQL> create table NEW_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition n_p_1 values less than (10) (subpartition n_p_1_s_1 values ('A'), subpartition n_p_1_s_2 values ('B'), subpartition n_p_1_s_3 values ('C'))
      5  ,
      6  partition n_p_2 values less than (20) (subpartition n_p_2_s_1 values ('A'), subpartition n_p_2_s_2 values ('B'), subpartition n_p_2_s_3 values ('C'))
      7  )
      8  /
    Table created.
    SQL>
    SQL> create table STG_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  /
    Table created.
    SQL>
    SQL> -- ensure that the Staging table is empty
    SQL> truncate table STG_part_subpart;
    Table truncated.
    SQL> -- exchanging a subpart out of part_subpart
    SQL> alter table part_subpart exchange subpartition
      2  p_2_s_1 with table STG_part_subpart;
    Table altered.
    SQL> -- exchanging the subpart into NEW_part_subpart
    SQL> alter table NEW_part_subpart exchange subpartition
      2  n_p_2_s_1 with table STG_part_subpart;
    Table altered.
    SQL>
    SQL>
    SQL> select * from NEW_part_subpart subpartition (n_p_2_s_1);
         COL_1 COL_2
            11 A
    SQL>
    SQL> select * from part_subpart subpartition (p_2_s_1);
    no rows selected
    SQL>
    I have exchanged subpartition p_2_s_1 out of the table part_subpart into the table NEW_part_subpart -- even with a different name for the subpartition (n_p_2_s_1), if so desired.
    NOTE : Since your source and target tables are in different schemas, you will have to move (or copy) the staging table STG_part_subpart from the first schema to the second schema after the first "exchange subpartition" is done. You will have to do this for every subpartition to be exchanged.
    Hemant K Chitale
    Edited by: Hemant K Chitale on Apr 4, 2011 10:19 AM
    Added clarification for cross-schema exchange.
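    For the cross-schema step mentioned in the note above, one possible way to copy the staging table into the second schema (a sketch only; the schema names are placeholders) is a plain CTAS within the same database:
    CREATE TABLE second_schema.STG_part_subpart AS
      SELECT * FROM first_schema.STG_part_subpart;
    The copy in second_schema can then be used in the second "exchange subpartition" against the target table there, and dropped afterwards.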
