Duplicate huge schema
Hi all,
I have a schema that contains 2000 GB of data.
What is the best practice to duplicate the schema?
Thanks
dyahav
Hi all,
Thanks for the quick responses and great links.
I have to copy one schema of 2000 GB on the same machine, and I can do it overnight (which means it can run offline).
I have no experience with schemas of this size, so I have to estimate the time required and find the best technique.
1. Do you know how to estimate the time required?
2. Is there any other technique to copy a schema?
3. If I have to copy the schema to another machine (identical HW and SW), is there another technique?
Thanks!
dyahav
Similar Messages
-
A co-worker is looking to duplicate our schema in XE. By duplicate, i mean a copy of the TABLEs, CONSTRAINTs, INDEXes, VIEWs, and SEQUENCEs, but not the data. This should only need to be done once.
From:Oracle9i Enterprise Edition Release 9.2.0.8.0
To: Oracle Database 10g Express Edition Release 10.2.0.1.0
We do not have all rights in the 9i DB.
How do I go about copying the schema?
Hi,
>>The command i used was (from Windows): exp user/pass@tnsname TABLES=(user.tablename) ROWS=N LOG=c:\a\log.txt FILE=c:\a\exp.txt
Keep in mind that you are using the TABLES clause. In this case, only tables will be exported, so views, functions, sequences and so on won't be exported. Therefore, I advise you to perform the export below:
exp user/password file=user.dmp rows=n
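For the full round trip, a minimal sketch (the TNS aliases src9i and xe, the scott account and the file names are illustrative placeholders, not from the thread):
exp scott/password@src9i file=scott_meta.dmp rows=n log=exp_meta.log
imp system/password@xe file=scott_meta.dmp fromuser=scott touser=scott log=imp_meta.log
Run without a TABLES or OWNER clause, exp defaults to a user-mode export of the connected schema, which picks up the views, sequences and PL/SQL as well.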
>>EXP-00009: no privilege to export <owner>'s table <table>
What user are you using to take the export?
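(As an aside: if a non-DBA user needs to export another schema's tables, the usual fix is the EXP_FULL_DATABASE role; a sketch, assuming a DBA grants it to the demo user a:
GRANT EXP_FULL_DATABASE TO a;
SYSTEM already has this role, which is why the second run in the demo below succeeds.)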
C:\>exp a/a file=test tables=b.emp
Export: Release 10.1.0.2.0 - Production on Fri Apr 11 15:23:17 2008
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
EXP-00009: no privilege to export A's table EMP
Export terminated successfully with warnings.
C:\>exp system/password file=test tables=b.emp
Export: Release 10.1.0.2.0 - Production on Fri Apr 11 15:23:36 2008
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
Current user changed to B
. . exporting table EMP 88 rows exported
Export terminated successfully without warnings.
C:\>exp a/a file=test tables=emp
Export: Release 10.1.0.2.0 - Production on Fri Apr 11 15:23:36 2008
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
. . exporting table EMP 88 rows exported
Export terminated successfully without warnings.
Cheers
Legatti -
Splitting huge Schema/E-R Model into Modular Schema/Multiple E-R Models
Hi,
Currently we have a huge DB schema (1500+ tables) using one single OWNER ID.
Based on domain partitioning, we intend to create 6 new OWNER IDs (based on individual module/domain)
and re-assign the appropriate tables to the respective domain.
Basically, we expect the 1500 tables under one OWNER ID to be split
into 250 tables per OWNER ID, with a total of 6 new OWNER IDs being created.
We will also need to create 6 new E-R models, 1 per OWNER ID.
We are trying to find out the best possible way to do the "splitting" of a
huge linear data model into 6 modular E-R models. Going forward, we would like
to maintain individual models on a per-domain basis rather than the existing monolithic model.
Any suggestions or tips on achieving this using SQL Developer Data Modeler would be greatly appreciated.
What is the best and cleanest way of achieving this?
Thanks
Auro
Hi Auro,
first you should be aware that you can have foreign keys created only between tables in one model.
If this is not a restriction for you then you can proceed:
1) create one subview per future model and put in each subview tables that will constitute the model
2) each model in separate design
- use "Export>To Data modeler Design" - select subview that you want to export and only objects belonging to that subview will be exported
Philip -
Hi,
We have a requirement to move a huge schema from one database to another using RMAN. (similar to export and import)
Can someone please provide the steps/scripts or link where i can get this details.
Thanks!
bLaK wrote:
Hi,
We have a requirement to move a huge schema from one database to another using RMAN. (similar to export and import)
Can someone please provide the steps/scripts or link where i can get this details.
Thanks!
RMAN won't handle backup or restore of logical objects.
Use expdp/impdp to move schema.
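For example, a minimal sketch of the usual sequence (the schema name appowner and the file names are placeholders; the DATA_PUMP_DIR directory object exists by default in 10g and later):
On the source database:
expdp system/password SCHEMAS=appowner DIRECTORY=DATA_PUMP_DIR DUMPFILE=appowner.dmp LOGFILE=appowner_exp.log
Copy the dump file to the target server's directory, then on the target database:
impdp system/password SCHEMAS=appowner DIRECTORY=DATA_PUMP_DIR DUMPFILE=appowner.dmp LOGFILE=appowner_imp.log
For a huge schema, adding PARALLEL=4 and DUMPFILE=appowner_%U.dmp lets Data Pump spread the work across multiple files.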
Using RMAN you can restore/duplicate the entire database. -
Schema Export using DBMS_DATAPUMP is extremely slow
Hi,
I created a procedure that duplicates a schema within a given database by first exporting the schema to a dump file using DBMS_DATAPUMP and then imports the same file (can't use network link because it fails most of the time).
My problem is that a regular schema datapump export takes about 1.5 minutes whereas the export using dbms_datapump takes about 10 times longer - something in the range of 14 minutes.
here is the code of the procedure that duplicates the schema:
CREATE OR REPLACE PROCEDURE MOR_DBA.copy_schema3 (
source_schema in varchar2,
destination_schema in varchar2,
include_data in number default 0,
new_password in varchar2 default null,
new_tablespace in varchar2 default null
) as
h number;
js varchar2(9); -- COMPLETED or STOPPED
q varchar2(1) := chr(39);
v_old_tablespace varchar2(30);
v_table_name varchar2(30);
BEGIN
/* open a new schema level export job */
h := dbms_datapump.open ('EXPORT', 'SCHEMA');
/* attach a file to the operation */
DBMS_DATAPUMP.ADD_FILE (h, 'COPY_SCHEMA_EXP' ||copy_schema_unique_counter.NEXTVAL || '.DMP', 'LOCAL_DATAPUMP_DIR');
/* restrict to the schema we want to copy */
dbms_datapump.metadata_filter (h, 'SCHEMA_LIST',q||source_schema||q);
/* apply the data filter if we don't want to copy the data */
IF include_data = 0 THEN
dbms_datapump.data_filter(h,'INCLUDE_ROWS',0);
END IF;
/* start the job */
dbms_datapump.start_job(h);
/* wait for the job to finish */
dbms_datapump.wait_for_job(h, js);
/* detach the job handle and free the resources */
dbms_datapump.detach(h);
/* open a new schema level import job */
h := dbms_datapump.open ('IMPORT', 'SCHEMA');
/* attach a file to the operation */
DBMS_DATAPUMP.ADD_FILE (h, 'COPY_SCHEMA_EXP' ||copy_schema_unique_counter.CURRVAL || '.DMP', 'LOCAL_DATAPUMP_DIR');
/* restrict to the schema we want to copy */
dbms_datapump.metadata_filter (h, 'SCHEMA_LIST',q||source_schema||q);
/* remap the importing schema name to the schema we want to create */
dbms_datapump.metadata_remap(h,'REMAP_SCHEMA',source_schema,destination_schema);
/* remap the tablespace if needed */
IF new_tablespace IS NOT NULL THEN
select default_tablespace
into v_old_tablespace
from dba_users
where username=source_schema;
dbms_datapump.metadata_remap(h,'REMAP_TABLESPACE', v_old_tablespace, new_tablespace);
END IF;
/* apply the data filter if we don't want to copy the data */
IF include_data = 0 THEN
dbms_datapump.data_filter(h,'INCLUDE_ROWS',0);
END IF;
/* start the job */
dbms_datapump.start_job(h);
/* wait for the job to finish */
dbms_datapump.wait_for_job(h, js);
/* detach the job handle and free the resources */
dbms_datapump.detach(h);
/* change the password as the new user has the same password hash as the old user,
which means the new user can't login! */
execute immediate 'alter user '||destination_schema||' identified by '||NVL(new_password, destination_schema);
/* finally, remove the dump file */
utl_file.fremove('LOCAL_DATAPUMP_DIR','COPY_SCHEMA_EXP' ||copy_schema_unique_counter.CURRVAL|| '.DMP');
/*EXCEPTION
WHEN OTHERS THEN --CLEAN UP IF SOMETHING GOES WRONG
SELECT t.table_name
INTO v_table_name
FROM user_tables t, user_datapump_jobs j
WHERE t.table_name=j.job_name
AND j.state='NOT RUNNING';
execute immediate 'DROP TABLE ' || v_table_name || ' PURGE';
RAISE;*/
end copy_schema3;
/
The import part of the procedure takes about 2 minutes, which is the same time a regular Data Pump import takes on the same schema.
If I disable the import completely, the export alone still takes about 14 minutes.
Does anyone know why the export using dbms_datapump takes so long?
Thanks.
Hi,
I did a tkprof on the DM trace file and this is what I found:
Trace file: D:\Oracle\diag\rdbms\instanceid\instanceid\trace\instanceid_dm00_8004.trc
Sort options: prsela execpu fchela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SQL ID: bjf05cwcj5s6p
Plan Hash: 0
BEGIN :1 := sys.kupc$que_int.receive(:2); END;
call        count    cpu  elapsed   disk  query  current   rows
Parse           3   0.00     0.00      0      0        0      0
Execute       229   1.26   939.00     10   2445        0     66
Fetch           0   0.00     0.00      0      0        0      0
total         232   1.26   939.00     10   2445        0     66
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 2)
Elapsed times include waiting on following events:
Event waited on                                 Times Waited  Max. Wait  Total Waited
wait for unread message on broadcast channel             949       1.01        936.39
********************************************************************************
What does "wait for unread message on broadcast channel" mean and why did it take 939 seconds (more than 15 minutes)?
Hi Every one,
I just installed Oracle 11g Enterprise Edition on my laptop.
Everything went well and I successfully connected to the database in SQL Developer, but I am not able to see the HR schema; all I see is something different.
can any one help me regarding this issue?
Thank
Any info much appreciated
cheers
Please do not post duplicates - Sample Schema download
Continue the discussion in your original thread -
100% Complete Schema Duplication
10gR2
Need to duplicate a schema 100%, including Pkg, Types, all objects generated
by XDB etc.
Does expdp/impdp do that ?
(or what does expdp/impdp skip, if any ?)
thanks
Schema level export should do it. It gets all of the objects owned by that user/schema.
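For instance, a sketch of the clone with Data Pump (the user and file names are placeholders):
expdp system/password SCHEMAS=srcuser DIRECTORY=DATA_PUMP_DIR DUMPFILE=srcuser.dmp LOGFILE=srcuser_exp.log
impdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=srcuser.dmp REMAP_SCHEMA=srcuser:copyuser LOGFILE=srcuser_imp.log
The usual gaps to check afterwards are objects not owned by the schema itself, such as public synonyms and grants made by other users.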
Thanks
chandra -
For reasons explained below, I want to try to re-import all my images into LR and hope that none/few are in fact considered new and are imported. Yet, for some folders, LR is apparently unable to detect that my source images are already in the catalog, and are on disk, and that the source file meta data matches what LR knows about the images. When I click an image in LR and Show in Finder, I do see the imported image on disk. I can edit the image in the Develop module. So, it seems good, but all is not well. Sorry for the long post here, but I wanted to provide as much info as I could, as I am really seeking your help, which I'd very much appreciate.
Here are some screen shots that illustrate the problem:
Finder contents of the original images
LR folder hierarchy
an image as seen in LR
Finder content of external LR copy of images
import showing 10 "new" photos
The original images ... (I'm not sure why the file date is April 2001 but the actual image date is January 2011; I may have just used the wrong date on the folder name?)
The LR folder hierarchy ...
An image as seen in LR ...
The external folder containing the images in the LR library
But on import of the original source folder, LR sees 10 "new" photos ...
I tried "Synchronize Folder ..." on this particular folder, and it simply hangs half-way through as seen in the screen shot below. IS THIS AN LR BUG? This is really odd, since "Synchronize Folder ..." on the top-level folder completes quickly.
I have a spreadsheet of of the EXIF data for the original files and those created by LR. (I extracted this info using the excellent and free pyExifToolGui graphical frontend for the command line tool ExifTool by Phil Harvey.) Almost all of the Exif data is the same, but LR has added some additional info to the files after import, including (of course) keywords. However, I would not have expected the differences I found to enter into the duplicate detection scheme. (I didn't see a way to attach the spreadsheet to this posting as it's not an "image".)
I'm running LR 5.7 on a 27" iMac with Yosemite 10.10.2, having used LR since LR2. I have all my original images (.JPEGs and RAWs of various flavors) on my internal drive on the Mac. To me this is like saving all my memory cards and never re-using them. Fortunately, these files are backed up several ways. I import these images (copying RAWs as DNG) into LR with a renaming scheme that includes the import number, original file creation date and original file name. There should be one LR folder for each original source file folder, with the identical folder name (usually a place and date). I store the LR catalog and imported images on an external drive. Amazingly and unfortunately my external drive failed, as did its twin, a same make/size drive that I used as a backup with Carbon Copy Cloner. I used Data Rescue 4 to recover to a new disk what I thought was almost all of the files on the external drive.
So, I thought all would be well, but when I tried "Synchronize Folder" using the top-level folder of my catalog, the dialog box appeared saying there were over 1000 "new" photos that had not been imported. This made me suspicious that I had failed to recover everything. But actually things are much worse than I thought. I have these counts of images:
80,0061 files in 217 folders for my original source files (some of these may be (temporary?) copies that I actually don't want to import into LR)
51,780 files in 187 folders on my external drive containing the LR photo library
49,254 images in the top-level folder in the LR catalog (why different from the external file count?)
35,332 images found during import of the top-level folder containing original images
22,560 images found as "new" by LR during import
1,074 "new" images reported by Synchronize Folder ... on the top-level folder in the catalog; different from import count
Clearly things are badly out of sync. I'd like to be sure I have all my images in LR, but none duplicated. Thus, I want to try to import the entire library and have LR tell me which photos are new. I have over 200 folders in LR. I am now proceeding to try importing each folder, one at a time, to try to reconcile the differences and import the truly missing images. This will be painful. And it may not be enough to fully resolve the above discrepancies.
Does anyone have any ideas or suggestions? I'd really appreciate your help!
Ken
Thanks for being on the case, dj! As you'll see below, YOU WERE RIGHT! But I am confused.
1. Does the same problem exist if you try to import (not synchronize) from that folder? In other words, does import improperly think these are not duplicates?
YES. Import improperly thinks they are NOT duplicates, but they are in fact the same image (though apparently not the EXACT SAME bytes on disk!)
2. According to the documentation, a photo is considered a duplicate "if it has the same, original filename; the same Exif capture date and time; and the same file size."
This is my understanding too.
3. Can you manually confirm that, for an example photo, that by examining the photo in Lightroom and the photo you are trying to synchronize/import, that these three items are identical?
NO, I CAN'T! The ORIGINAL file name (in the source folder) is the SAME as it was when I first imported that folder. That name is used as part of the renaming process using a custom template. However, the file SIZES are different. Here is the Finder Get Info for both files. Initially, they appeared to be the same SIZE, 253KB, looking at the summary. But, if you look at the exact byte count, the file sizes are DIFFERENT: 252,632 for the original file and 252,883 for the already-imported file:
This difference alone is enough to indicate why LR does not consider the file a duplicate.
Furthermore, there IS one small difference in the EXIF data regarding dates ... the DateTimeOriginal:
ORIGINAL name: P5110178.JPG
CreateDate:        2001:05:11 15:27:18
DateTimeDigitized: 2001:05:11 15:27:18-07:00
DateTimeOriginal:  2001:01:17 11:29:00
FileModifyDate:    2011:01:17 11:29:00-07:00
ModifyDate:        2005:04:24 14:41:05
After LR rename: KRJ_0002_010511_P5110178.JPG
CreateDate:        2001:05:11 15:27:18
DateTimeDigitized: 2001:05:11 15:27:18-07:00
DateTimeOriginal:  2001:05:11 15:27:18
FileModifyDate:    2011:01:17 11:29:02-07:00
ModifyDate:        2005:04:24 14:41:05
So ... now I see TWO reasons why LR doesn't consider these duplicates. Though the file NAME is the same (as original), the file sizes ARE slightly different. The EXIF "DateTimeOriginal" is DIFFERENT. Therefore, LR considers them NOT duplicates.
4a. With regards to the screen captures of your images and operating system folder, I do not see that the filename is the same; I see the file names are different. Is that because you renamed the photos in Lightroom (either during import or afterwards)?
I renamed the file on import using a custom template ...
4b. Can you show a screen capture of this image that shows the original file name in the Lightroom metadata panel (it appears when the dropdown is set to EXIF and IPTC)?
SO ....
The METADATA shown by LR does NOT include the ORIGINAL file name (but I think I have seen it displayed for other files?). The file SIZE in the LR metadata panel (246.96 KB) is different from what Finder reports (254 KB). There are three "date" fields in the LR metadata, and five that I've extracted from the EXIF data. I'm not sure which EXIF date corresponds to the "Date Time" shown in the LR metadata.
I don't understand how these differences arose. I did not touch the original file outside LR. LR is the only program that touches the file it has copied to my external drive during import (though it was RECOVERED from a failed disk by Data Rescue 4).
NOW ...
I understand WHY LR considers the files different (but not how they came to be so). The question now is WHAT DO I DO ABOUT IT? Is there any tool I can use to adjust the original (or imported) file's SIZE and EXIF data to match the file LR has? Any way to override or change how LR does duplicate detection?
Thanks so very much, dj. Any ideas on how to get LR to ignore these (minor) differences would be hugely helpful. -
Hi,
Could anybody tell me if it is possible to export a huge table (900 million records) to an external file and then create an external table based on this external file?
Does anybody know the best way to export/import a huge schema to save time and disk space?
And all these in Oracle 9i.
Thanks,
Paul
Hi Paul,
>>export a huge table (900 million records) to an external file and then create an external table based on this external file?
Yes! You can use the SQL*Plus "spool" command to extract the data in a comma-delimited format (CSV). Then, you can define an external table to point to the flat file. I have some notes here:
http://www.dba-oracle.com/art_ext_tabs_spreadsheet.htm -
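A minimal sketch of the approach, assuming a table BIG_TAB(id, name) and a /exports directory on the server (both names hypothetical):
-- 1) spool the data out as CSV
SET HEADING OFF PAGESIZE 0 FEEDBACK OFF TRIMSPOOL ON LINESIZE 1000
SPOOL /exports/big_tab.csv
SELECT id || ',' || name FROM big_tab;
SPOOL OFF
-- 2) point an external table at the flat file
CREATE OR REPLACE DIRECTORY ext_dir AS '/exports';
CREATE TABLE big_tab_ext (
  id   NUMBER,
  name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('big_tab.csv')
);
For 900 million rows, spooling will be slow; splitting the extract into ranges and listing several files in the LOCATION clause is a common variation.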
Error when creating Web Service Proxy
Hi,
I am creating a web service proxy to call a web service (OSB) "http://st-curriculum.oracle.com/obe/jdev/obe11jdev/ps1/webservices/ws.html#t5". I am making a client to call an OSB proxy and I am able to see the WSDL in IE; however, when I try to create a client using JDeveloper I get the error below.
Please advise me.
Thanks
Yatan
oracle.jdeveloper.webservices.model.WebServiceException: Error creating model from wsdl "http://localhost:8001/xx/som/contracts/CustomerContract?wsdl": A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.TaskCompletionMessage" is already in use. Use a class customization to resolve this conflict.(Relevant to above error) another "TaskCompletionMessage" is generated from here.(Relevant to above error) another "SOMMessage" is generated from here.A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.SOMMessage" is already in use. Use a class customization to resolve this conflict.Two declarations cause a collision in the ObjectFactory class.(Related to above error) This is the other declaration.
at oracle.jdeveloper.webservices.model.java.JavaWebService.createPortTypes(JavaWebService.java:1635)
at oracle.jdeveloper.webservices.model.WebService.createServiceFromWSDL(WebService.java:2846)
at oracle.jdeveloper.webservices.model.WebService.createServiceFromWSDL(WebService.java:2611)
at oracle.jdeveloper.webservices.model.java.JavaWebService.<init>(JavaWebService.java:509)
at oracle.jdeveloper.webservices.model.java.JavaWebService.<init>(JavaWebService.java:461)
at oracle.jdeveloper.webservices.model.proxy.WebServiceProxy$ProxyJavaWebService.<init>(WebServiceProxy.java:2268)
at oracle.jdeveloper.webservices.model.proxy.WebServiceProxy.updateServiceModel(WebServiceProxy.java:1701)
at oracle.jdeveloper.webservices.model.proxy.WebServiceProxy.setDescription(WebServiceProxy.java:525)
at oracle.jdevimpl.webservices.wizard.jaxrpc.proxy.ProxyJaxWsSpecifyWSDLPanel.setDescription(ProxyJaxWsSpecifyWSDLPanel.java:238)
at oracle.jdevimpl.webservices.wizard.jaxrpc.common.SpecifyWsdlPanel.buildModel(SpecifyWsdlPanel.java:1109)
at oracle.jdevimpl.webservices.wizard.jaxrpc.common.SpecifyWsdlPanel$5.run(SpecifyWsdlPanel.java:661)
at oracle.ide.dialogs.ProgressBar.run(ProgressBar.java:655)
at java.lang.Thread.run(Thread.java:619)
Caused by: oracle.jdeveloper.webservices.tools.WsdlValidationException: Error creating model from wsdl "http://localhost:8001/xx/som/contracts/CustomerContract?wsdl": A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.TaskCompletionMessage" is already in use. Use a class customization to resolve this conflict.(Relevant to above error) another "TaskCompletionMessage" is generated from here.(Relevant to above error) another "SOMMessage" is generated from here.A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.SOMMessage" is already in use. Use a class customization to resolve this conflict.Two declarations cause a collision in the ObjectFactory class.(Related to above error) This is the other declaration.
at oracle.jdevimpl.webservices.tools.wsa.WsaAdaptor.newWsdlValidationException(WsaAdaptor.java:825)
at oracle.jdevimpl.webservices.tools.wsa.WsaAdaptor.getSeiInfo(WsaAdaptor.java:515)
at oracle.jdeveloper.webservices.tools.WebServiceTools.getSeiInfo(WebServiceTools.java:523)
at oracle.jdeveloper.webservices.model.java.JavaWebService.getSeiInfo(JavaWebService.java:1741)
at oracle.jdeveloper.webservices.model.java.JavaWebService.createPortTypes(JavaWebService.java:1496)
... 12 more
Caused by: oracle.j2ee.ws.common.tools.api.ValidationException: Error creating model from wsdl "http://localhost:8001/xx/som/contracts/CustomerContract?wsdl": A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.TaskCompletionMessage" is already in use. Use a class customization to resolve this conflict.(Relevant to above error) another "TaskCompletionMessage" is generated from here.(Relevant to above error) another "SOMMessage" is generated from here.A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.SOMMessage" is already in use. Use a class customization to resolve this conflict.Two declarations cause a collision in the ObjectFactory class.(Related to above error) This is the other declaration.
at oracle.j2ee.ws.tools.wsa.jaxws.JaxwsWsdlToJavaTool.getJAXWSModel(JaxwsWsdlToJavaTool.java:664)
at oracle.j2ee.ws.tools.wsa.WsdlToJavaTool.createJAXWSModel(WsdlToJavaTool.java:475)
at oracle.j2ee.ws.tools.wsa.Util.getJaxWsSeiInfo(Util.java:1357)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at oracle.jdevimpl.webservices.tools.wsa.Assembler$2$1.invoke(Assembler.java:218)
at $Proxy50.getJaxWsSeiInfo(Unknown Source)
at oracle.jdevimpl.webservices.tools.wsa.WsaAdaptor.getSeiInfo(WsaAdaptor.java:505)
... 15 more
Caused by: oracle.j2ee.ws.common.tools.api.ValidationException: A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.TaskCompletionMessage" is already in use. Use a class customization to resolve this conflict.(Relevant to above error) another "TaskCompletionMessage" is generated from here.(Relevant to above error) another "SOMMessage" is generated from here.A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.SOMMessage" is already in use. Use a class customization to resolve this conflict.Two declarations cause a collision in the ObjectFactory class.(Related to above error) This is the other declaration.
at oracle.j2ee.ws.tools.wsa.SchemaTool.genValueTypes(SchemaTool.java:188)
at oracle.j2ee.ws.tools.wsa.jaxws.JaxwsWsdlToJavaTool.getJAXWSModel(JaxwsWsdlToJavaTool.java:647)
... 24 more
Caused by: oracle.j2ee.ws.common.databinding.common.spi.DatabindingException: A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.TaskCompletionMessage" is already in use. Use a class customization to resolve this conflict.(Relevant to above error) another "TaskCompletionMessage" is generated from here.(Relevant to above error) another "SOMMessage" is generated from here.A class/interface with the same name "com.xx.gpsc.som.core.schema.somcommon.v1.SOMMessage" is already in use. Use a class customization to resolve this conflict.Two declarations cause a collision in the ObjectFactory class.(Related to above error) This is the other declaration.
at oracle.j2ee.ws.common.tools.databinding.jaxb20.JAXB20TypeGenerator.generateJavaTypes(JAXB20TypeGenerator.java:120)
at oracle.j2ee.ws.tools.wsa.SchemaTool.genValueTypes(SchemaTool.java:186)
... 25 more
Hi Yatan
The error usually means there is a duplicate variable/schema element declared in the WSDL or in an XSD referred to by the WSDL. In a WSDL, for any operation, the inputs and outputs are most often complex XSD elements. We declare these XSDs in the same file or in another file and import that into the .wsdl file. So check or validate your XSD file for any duplicates.
In JDeveloper itself, I think you can open the XSD or WSDL and validate it from the right-click menu options.
Thanks
Ravi Jegga -
Error when creating web service client in netbeans
I tried to create a web service client from a WSDL and an error pops up:
web service client can not be created by jaxws:wsimport utility.
reason: com.sun.tools.xjc.api.SchemaCompiler.resetSchema()V
There might be a problem during java artifacts creation: for example a name conflict in generated classes.
To detect the problem see also the error messages in output window.
You may be able to fix the problem in WSDL Customization dialog
(Edit Web Service Attributes action)
or by manual editing of the local wsdl or schema files, using the JAXB customization
(local wsdl and schema files are located in xml-resources directory).
end of error message
I am using NetBeans 6.0 RC 2 and the bundled Tomcat 6.0.13. Please help me.
Hi Yatan
The error usually means there is a duplicate variable/schema element declared in the WSDL or in an XSD referred to by the WSDL. In a WSDL, for any operation, the inputs and outputs are most often complex XSD elements. We declare these XSDs in the same file or in another file and import that into the .wsdl file. So check or validate your XSD file for any duplicates.
In JDeveloper itself, I think you can open the XSD or WSDL and validate it from the right-click menu options.
Thanks
Ravi Jegga -
Import Process Slow .. Suggestions for Speedup ?
Hi,
I am doing an import of data from a dump file; it has been running for the last 15 hours. The size of the database is 15GB on a 10.2.0.3.
The majority of the size is in one particular schema.
The requirement is to duplicate the schema into a new tablespace. I created the structure off of the DDL from an index file and then loading data using :
imp system/password buffer=125000000 file=............. log=....._rows.log fromuser=scott touser=alice rows=y ignore=y constraints=n indexes=n
Somehow, for the last 12 or so hours, it has been trying to load data into a table consisting of 6 million rows, and still hasn't completed.
I can see for a fact that the UNDO TS has crossed 6GB and the tablespace for the schema around 3GB.
The redo logs are 50MB each across 3 groups, and there are constant "checkpoint incomplete" messages as well.
Now this is a test machine, but the machine where this is supposed to happen tomorrow is a 9.2.0.3 on a Solaris box.
Is there any way to speed up this process ... maybe stop all the logging? What other ways are there to import all this data faster than the current, unpredictably slow process?
Thanks.
If you are copying data within the same database, why not use CTAS or INSERT, with PARALLEL and APPEND?
e.g.
ALTER SYSTEM SET PARALLEL_MAX_SERVERS=8;
CREATE TABLE newschema.table_a AS SELECT * FROM oldschema.table_a where 1=2;
ALTER TABLE newschema.table_a NOLOGGING;
ALTER TABLE newschema.table_a PARALLEL 2;
ALTER TABLE oldschema.table_a PARALLEL 2;
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND */ INTO newschema.table_a SELECT * FROM oldschema.table_a ;
COMMIT;
ALTER TABLE oldschema.table_a NOPARALLEL;
CREATE INDEX newschema.table_a_ndx_1 ON newschema.table_a(col1,col2) PARALLEL 4 NOLOGGING;
ALTER TABLE newschema.table_a NOPARALLEL;
ALTER INDEX newschema.table_a_ndx_1 NOPARALLEL;
and run multiple tables in parallel (table_a being in the above block of code, table_b being in another block of code running concurrently), with each block of code using 4 ParallelQuery operators.
Hemant K Chitale
http://hemantoracledba.blogspot.com -
Sequences on more shemas. How to use???
I am having big trouble using sequences correctly in Oracle Lite.
Until now I had only one Olite publication linked to a schema on the backend DB, with many objects: tables, indexes, views and 5 sequences.
Example:
publication --> schema
schema.seq1, schema.seq2, ... , schema.seq5
reachable by olite client as:
db = $USER_PUBLICATION
Now I have to "duplicate" this schema and deploy another publication, adding a prefix.
Example:
new_publication -> new_schema
db = $USER_NEW_PUBLICATION
I found no problem for any objects but sequences,
because tables, indexes, etc. had the right schema as "owner",
but I did not find any owner-like parameter for sequences.
To explain better,
I need to use the same sequence names (seq1, seq2, ..., seq5) for both publications,
referring to the right sequences in the right schema, but I'm not sure if (and how) this is possible.
Could you help me?
Thanks.
Daniele
We had a similar issue when we changed our application from "standard" Oracle Lite (complete and fast refresh PIs) to use queue based PIs.
We did a workaround for the problem by:
1) do not include the sequences at all in the second application (you get errors if you do)
2) run the following (fusionbci2 is our new application, fusionbci is the old, and fusionbci2 is effectively a clone of fusionbci)
-- fusionbci2_sequences.sql
-- associate all sequences with the fusionbci2 application
DECLARE
l_fusionbci2 VARCHAR2(30);
-- this cursor gets the internal publication id for the fusionbci2 application
CURSOR c_fusionbci2 IS
SELECT name
FROM cv$all_publications
WHERE name_template LIKE 'fusionbci2%';
BEGIN
OPEN c_fusionbci2;
FETCH c_fusionbci2 INTO l_fusionbci2;
CLOSE c_fusionbci2;
-- this insert creates copies of the sequence relationship entries for the fusionbci application
-- linked to fusionbci2
INSERT INTO c$pub_objects
(pub_name
,db_name
,obj_id
,obj_type
,obj_weight)
SELECT l_fusionbci2
,'fusionbci2'
,obj_id
,obj_type
,obj_weight
FROM c$pub_objects
WHERE db_name='fusionbci'
AND obj_type='NA_SEQ';
END;
/
NOTE: this works, but is probably not recommended by Oracle, and should only be used where individual clients will use only one of the applications at a time (if they use both, the sequence ranges are likely to overlap).
The progress of gather_schema_stats
hello,
Database version: Oracle 10.2.0.4 on HP-UX.
we use the following options of gather_schema_stats to gather statistics for a huge schema:
BEGIN
  DBMS_STATS.gather_schema_stats(
    ownname          => '<SCHEMA>',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE,
    granularity      => 'ALL');
END;
/
How can I determine the progress of gather_schema_stats? Does an internal table exist to monitor this?
We want/need to know the status of the progress.
regards,
You can query the V$SESSION_LONGOPS view. Something like:
SELECT *
FROM v$session_longops
WHERE time_remaining > 0
will show you all the sessions that have long-running operations. GATHER_SCHEMA_STATS will populate this view, as will other Oracle operations that take more than a few seconds (e.g. a full scan of a large table).
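A slightly richer sketch with the usual progress columns (all are standard V$SESSION_LONGOPS columns; adjust the filter to taste):
SELECT sid, serial#, opname, target, sofar, totalwork,
       ROUND(100 * sofar / totalwork, 1) AS pct_done,
       time_remaining
  FROM v$session_longops
 WHERE totalwork > 0
   AND time_remaining > 0;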
Justin -
Impdp ORA-31684: Object type USER:"USERNAME" already exists
Hi,
I use expdp/impdp to duplicate one schema in a database like this:
expdp system/manager@DB SCHEMAS=SCHEMANAME
then I drop and recreate the destination schema like this:
drop user SCHEMANAME_NEW cascade;
create user SCHEMANAME_NEW identified by PW default tablespace TABLESPACENAME;
and impdp like this
impdp system/manager@DB FULL=Y REMAP_SCHEMA=SCHEMANAME:SCHEMANAME_NEW
and I get this error:
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SCHEMANAME_NEW" already exists
I know that the import was successful, but this error breaks my Hudson build.
I tried to add exclude like this
impdp system/manager@DB FULL=Y REMAP_SCHEMA=SCHEMANAME:SCHEMANAME_NEW exclude=USER:\"=\'SCHEMANAME_NEW\'\"
I need to get rid of this error.
Thx
You get this error because you precreated the user. All you need to do is add
exclude=user
to either your expdp or impdp command. Or let impdp create the user for you. If you need it to have a different tablespace you can just use the
remap_tablespace=old_tbs:new_tbs
This should take care of the default tablespace on the create user command.
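Putting the advice together, a sketch (the tablespace names are placeholders; schema-mode import is substituted here for the original FULL=Y, which also works):
impdp system/manager@DB SCHEMAS=SCHEMANAME REMAP_SCHEMA=SCHEMANAME:SCHEMANAME_NEW EXCLUDE=USER REMAP_TABLESPACE=OLD_TBS:NEW_TBS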
Dean