Metadata for full database
Hi,
We have a full Data Pump export of an Oracle 10g database that is 2 months old. I need to drop the existing database and import that 2-month-old dump.
Before I drop the database, I need to capture the full metadata of the existing database. Is there a command to export the full metadata so that, after dropping the database, I can recreate it and run the import?
I appreciate the help.
Thanks
1- I understand you have taken a full export (data + metadata) 2 months ago, right?
2- Why do you need to drop your database? What is forcing you to? Dropping the actual database means you have to create a new one; the database import will NOT create the new database for you.
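If the goal is only to capture the dictionary definitions before dropping the database, a metadata-only Data Pump export is one way to do it. This is a sketch: the directory object, paths, and file names are placeholders, not taken from the original post.

```shell
# Create (or reuse) a directory object for Data Pump.
# Run as a DBA user; dp_dir and the path are assumptions for illustration.
sqlplus -s system/password <<'EOF'
CREATE OR REPLACE DIRECTORY dp_dir AS '/u01/exports';
EOF

# FULL=Y with CONTENT=METADATA_ONLY writes object definitions only, no table data.
expdp system/password FULL=Y CONTENT=METADATA_ONLY \
     DIRECTORY=dp_dir DUMPFILE=full_meta.dmp LOGFILE=full_meta.log
```

The resulting dump can later be turned back into DDL scripts with impdp's SQLFILE parameter if you only want the statements rather than an actual import.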
Similar Messages
-
Is it possible to use markers in a Premiere Pro sequence such as Chapter / Comment / Segmentation and export the XMP metadata for a database so that when the video is used as a Video On-Demand resource, a viewer can do a keyword search and jump to a related point in the video?
The marker types have to take turns:
you have to disable one and enable the other manually. -
Estimating the backup size for full database backup?
How to estimate the backup file size for the following backup command?
Oracle 10g R2 on HPUX
BACKUP INCREMENTAL LEVEL 0 DATABASE;
Thank you,
Smith
Depends on the number of used blocks, block size, etc. You could probably get a rough formula for backup size based on the contents of dba_tab_statistics (subtracting empty_blocks), dba_ind_statistics, etc.
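A quick way to get a ballpark figure is to total the allocated segment space. This is only a rough sketch: a level 0 backup copies used blocks, so the real backup is usually somewhat smaller than this upper bound.

```shell
# Sum of allocated segment space, as an upper-bound estimate
# for the size of a level 0 (full) RMAN backup.
sqlplus -s / as sysdba <<'EOF'
SELECT ROUND(SUM(bytes)/1024/1024/1024, 1) AS est_gb
FROM   dba_segments;
EOF
```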
-
Full database exp/imp between RAC and single database
Hi Experts,
we have a 4-node Oracle 10gR2 RAC database on Linux. I am trying to duplicate the RAC database into a single-instance Windows database.
Both databases are the same version. During the import, I had to create 4 undo tablespaces to keep the import running.
How can I keep just one undo tablespace in the single-instance database?
Does anyone have experience with exp/imp from a RAC database to a single-instance database to share?
Thanks
Jim
Edited by: user589812 on Nov 13, 2009 10:35 AM
Jim,
I also want to know: can we add exclude=tablespace on the impdp command for a full database exp/imp?
You can't use exclude=tablespace with exp/imp. It is for Data Pump expdp/impdp only.
I am very interested in your recommendation.
But for a full database impdp, how do I exclude a table during the full import? May I have an example for this case?
I used expdp for a full database export, but I got an error in the expdp log: ORA-31679: Table data object "SALE"."TOAD_PLAN_TABLE" has long columns, and longs can not be loaded/unloaded using a network link
Having LONG columns in a table means that it can't be exported/imported over a network link. To exclude it, you can use the exclude expression:
expdp user/password exclude=TABLE:"= 'SALES'" ...
This will exclude all tables named sales. If you have that table in schema scott and then in schema blake, it will exclude both of them. The error that you are getting is not a fatal error, but that table will not be exported/imported.
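Since the quotes in that EXCLUDE clause are often mangled by the operating-system shell, it is usually easier to put the filter in a parameter file instead. This is a sketch; the file names and other parameters are placeholders.

```shell
# Inside a parameter file, no shell escaping of the quotes is needed.
cat > exclude.par <<'EOF'
FULL=Y
DIRECTORY=dp_dir
DUMPFILE=full.dmp
LOGFILE=full.log
EXCLUDE=TABLE:"= 'TOAD_PLAN_TABLE'"
EOF

expdp system/password parfile=exclude.par
```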
the final message as
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
F:\ORACLEBACKUP\SALEFULL091113.DMP
Job "SYSTEM"."SYS_EXPORT_FULL_01" completed with 1 error(s) at 16:50:26
Yes, the fact that it did not export one table does not make the job fail; it will continue exporting all other objects.
I dropped the database that generated the expdp dump file,
then recreated a blank database and ran impdp again.
But I got lots of errors such as:
ORA-39151: Table "SYSMAN"."MGMT_ARU_OUI_COMPONENTS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
ORA-39151: Table "SYSMAN"."MGMT_BUG_ADVISORY" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
...
ORA-31684: Object type TYPE_BODY:"SYSMAN"."MGMT_THRESHOLD" already exists
ORA-39111: Dependent object type TRIGGER:"SYSMAN"."SEV_ANNOTATION_INSERT_TR" skipped, base object type VIEW:"SYSMAN"."MGMT_SEVERITY_ANNOTATION" already exists
and the last line is:
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 2581 error(s) at 11:54:57
Yes, even though you think you have an empty database, if you have installed any apps or anything, they may create tables that also exist in your dump file. If you know that you want the tables from the dump file and not the existing ones in the database, then you can use this on the impdp command:
impdp user/password table_exists_action=replace ...
If a table that is being imported already exists, Data Pump will detect this, drop the table, then re-create it, and all of its dependent objects will be created. If you don't, the table and all of its dependent objects will be skipped (which is the default).
There are 4 options with table_exists_action
replace - I described above
skip - default, means skip the table and dependent objects like indexes, index statistics, table statistics, etc
append - keep the existing table and append the data to it, but skip dependent objects
truncate - truncate the existing table and add the data from the dumpfile, but skip dependent objects.
Hope this helps.
Dean -
Sql Server Management Assistant (SSMA) Oracle okay for large database migrations?
All:
We don't have much experience with the SSMA (Oracle) tool and need some advice from those of you familiar with it. We must migrate an Oracle 11.2.0.3.0 database to SQL Server 2014. The Oracle database consists of approximately 25,000 tables and 30,000
views and related indices. The database is approximately 2.3 TB in size.
Is this do-able using the latest version of SSMA-Oracle? If so, how much horsepower would you throw at this to get it done?
Any other gotchas and advice appreciated.
Kindest Regards,
Bill
Bill Davidson
Hi Bill,
SSMA supports migrating large Oracle databases. To migrate an Oracle database to SQL Server 2014, you could use the latest version:
Microsoft SQL Server Migration Assistant v6.0 for Oracle. Before the migration, you should pay attention to the points below.
1.The account that is used to connect to the Oracle database must have at least CONNECT permissions. This enables SSMA to obtain metadata from schemas owned by the connecting user. To obtain metadata for objects in other schemas and then convert objects
in those schemas, the account must have the following permissions: CREATE ANY PROCEDURE, EXECUTE ANY PROCEDURE, SELECT ANY TABLE, SELECT ANY SEQUENCE, CREATE ANY TYPE, CREATE ANY TRIGGER, SELECT ANY DICTIONARY.
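The privilege list in point 1 could be granted in one pass, for example as below. This is a sketch: ssma_user is a placeholder account name, and CREATE SESSION is used here to satisfy the "CONNECT permissions" requirement.

```shell
# Grant the privileges SSMA needs to read metadata across schemas.
# ssma_user is a hypothetical migration account.
sqlplus -s / as sysdba <<'EOF'
GRANT CREATE SESSION,
      CREATE ANY PROCEDURE, EXECUTE ANY PROCEDURE,
      SELECT ANY TABLE, SELECT ANY SEQUENCE,
      CREATE ANY TYPE, CREATE ANY TRIGGER,
      SELECT ANY DICTIONARY
TO ssma_user;
EOF
```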
2.Metadata about the Oracle database is not automatically refreshed. The metadata in Oracle Metadata Explorer is a snapshot of the metadata when you first connected, or the last time that you manually refreshed metadata. You can manually update metadata
for all schemas, a single schema, or individual database objects. For more information about the process, please refer to the similar article:
https://msdn.microsoft.com/en-us/library/hh313203(v=sql.110).
3.The account that is used to connect to SQL Server requires different permissions depending on the actions that the account performs as the following:
• To convert Oracle objects to Transact-SQL syntax, to update metadata from SQL Server, or to save converted syntax to scripts, the account must have permission to log on to the instance of SQL Server.
• To load database objects into SQL Server, the account must be a member of the sysadmin server role. This is required to install CLR assemblies.
• To migrate data to SQL Server, the account must be a member of the sysadmin server role. This is required to run the SQL Server Agent data migration packages.
• To run the code that is generated by SSMA, the account must have Execute permissions for all user-defined functions in the ssma_oracle schema of the target database. These functions provide equivalent functionality of Oracle system functions, and
are used by converted objects.
• If the account that is used to connect to SQL Server is to perform all migration tasks, the account must be a member of the sysadmin server role.
For more information about the process, please refer to the similar article:
https://msdn.microsoft.com/en-us/library/hh313158(v=sql.110)
4.Metadata about SQL Server databases is not automatically updated. The metadata in SQL Server Metadata Explorer is a snapshot of the metadata when you first connected to SQL Server, or the last time that you manually updated metadata. You can manually update
metadata for all databases, or for any single database or database object.
5.If the engine being used is Server Side Data Migration Engine, then, before you can migrate data, you must install the SSMA for Oracle Extension Pack and the Oracle providers on the computer that is running SSMA. The SQL Server Agent service must also
be running. For more information about how to install the extension pack, see Installing Server Components (OracleToSQL). And when SQL Express edition is used as the target database, only client side data migration is allowed and server side data migration
is not supported. For more information about the process, please refer to the similar article:
https://msdn.microsoft.com/en-us/library/hh313202(v=sql.110)
For how to migrate Oracle Databases to SQL Server, please refer to the similar article:
https://msdn.microsoft.com/en-us/library/hh313159(v=sql.110).aspx
Regards,
Michelle Li -
I'd like to move an on-premesis SQL Server Database to SQL Azure. I've used SQL Mgmt Studio to Extract Data Tier Application and save my db as a dacpac file. Now I'm connected to my Azure server and I've chosen to Deploy Data Tier Application. I select my
dacpac and the deploy starts but then on the last step "Registering metadata for database" it times out. I've tried it a couple of times and each time the deployed database is there and appears to be fully populated, but I'm not sure if I can ignore
that error and continue. What is supposed to happen in that step, and should I expect it to fail when deploying to SQL Azure?
I'm following the steps here http://msdn.microsoft.com/en-us/library/hh694043.aspx in the Using Migration Tools > Data-tier Application DAC Package section, except that to deploy there's no SQL Mgmt Studio > Object Explorer [server]
> Management >"Data Tier Applications" node, so I'm deploying by right-clicking on the server name and choosing "Deploy Data-tier Application".
My (total) guess here is that it's deployed the database fine and it's doing whatever magic happens when you register a data tier application, except that it's not working for SQL Azure.
I'm running against a server created for the new Azure service tiers, not against a Web/Business edition server.
The full details of error I get are below.
thanks,
Rory
TITLE: Microsoft SQL Server Management Studio
Could not deploy package.
Warning SQL0: A project which specifies SQL Server 2008 as the target platform may experience compatibility issues with SQL Azure.
(Microsoft.SqlServer.Dac)
ADDITIONAL INFORMATION:
Unable to register data-tier application: Unable to reconnect to database: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft.Data.Tools.Schema.Sql)
Unable to reconnect to database: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft.Data.Tools.Schema.Sql)
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft SQL Server, Error: -2)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&ProdVer=11.00.9213&EvtSrc=MSSQLServer&EvtID=-2&LinkId=20476
The wait operation timed out
BUTTONS:
OK
Hello,
The registration process creates a DAC definition that defines the objects in the database, and register the DAC definition in the master system database in Windows Azure SQL Database.
Based on the error message, there is a timeout error when connecting to SQL Database. Did you deploy a large database? When moving large data to Azure SQL Database, it is recommended to use SQL Server Integration Services (SSIS) or the BCP utility.
Or you can try to create a client application with the Data-Tier Application Framework (DACFx) client tool to import the database and handle connection loss by re-establishing the connection.
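The BCP route could look roughly like this. It is a sketch only: the server names, database, table, and credentials are placeholders.

```shell
# Export a table from the on-premises server in native format
# (Windows authentication via -T).
bcp MyDb.dbo.MyTable out MyTable.dat -S onprem-server -T -n

# Load it into the Azure SQL database (SQL authentication via -U/-P).
bcp MyDb.dbo.MyTable in MyTable.dat \
    -S myserver.database.windows.net -U azureuser -P 'password' -n
```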
Reference: http://sqldacexamples.codeplex.com/
Regards,
Fanny Liu
If you have any feedback on our support, please click here.
Fanny Liu
TechNet Community Support -
DPM is Only Allowing Express Full Backups For a Database Set to Full Recovery Model
I have just transitioned my SQL backups from a server running SCDPM 2012 SP1 to a different server running 2012 R2. All backups are working as expected except for one. The database in question is supposed to be backed up with a daily express
full and hourly incremental schedule. Although the database is set to full recovery model, the new DPM server says that recovery points will be created for that database based on the express full backup schedule. I checked the logs on the old DPM
server and the transaction log backups were working just fine up until I stopped protecting the data source. The SQL server is 2008 R2 SP2. Other databases on the same server that are set to full recovery model are working just fine. If we
switch the recovery model of a database that isn't protected by DPM and then start the wizard to add it to the protection group it properly sees the difference when we flip the recovery model back and forth. We also tried switching the recovery model
on the failing database from full to simple and then back again, but to no avail. Both the SQL server and the DPM server have been rebooted. We have successfully set up transaction log backups in a SQL maintenance plan as a test, so we know the
database is really using the full recovery model.
Is there anything that someone knows about that can trigger a false positive for recovery model to backup type mismatches?
I was having this same problem and appear to have found a solution. I wanted hourly recovery points for all my SQL databases. I was getting hourly for some but not for others. The others were only getting a recovery point for the Full Express
backup. I noted that some of the databases were in simple recovery mode so I changed them to full recovery mode but that did not solve my problem. I was still not getting the hourly recovery points.
I found an article that seemed to indicate that SCDPM did not recognize any change in the recovery model once protection had started. My database was in simple recovery mode when I added it (auto) to protection so even though I changed it to full recovery
mode SCDPM continued to treat it as simple.
I tested this by 1) verifying my db is set to full recovery, 2) backing it up and restoring it with a new name, 3) allowing SCDPM to automatically add it to protection overnight, 4) verifying the next day that I am getting hourly recovery points on the copy of the db.
It worked. The original db was still only getting express full recovery points and the copy was getting hourly. I suppose that if I don't want to restore a production db with an alternate name I will have to remove the db from protection, verify
that it is set to full, and then add it back to protection. I have not tested this yet.
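To double-check what SQL Server itself reports for the recovery model (independent of whatever DPM has cached), a quick query helps. The server and database names below are placeholders.

```shell
# Ask SQL Server directly which recovery model each database uses.
sqlcmd -S SQLSERVER01 -E -Q "SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyProtectedDb';"
```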
This is the article I read:
Article I read -
Full Database Backup in DB13 for MS SQL Server databases
Hello,
We have some SAP systems using the MS SQL Server database. I want to know if it is possible to setup the Full Database Backup option in transaction DB13 to store the files in a hard disk space.
I already did this type of configuration on SAP systems with Oracle databases, where I used the init<SID>.sap file to hold the configuration, for instance the disk path where the backup files should be saved. But for MS SQL Server databases in DB13, I don't know how to do that: when I add the "Full Database Backup" option in DB13, it shows me some options, but only for tape. I don't want to store the backup files on tapes; I want to save the files on the server's own disk or on an external disk, but I don't see where I can configure that type of storage.
This is what you see in DB13 (when using an MS SQL Server database) under "Full Database Backup", and as you can see I only have options for TAPE, not for disk or any other type of storage. How can I do that (if it is possible)?
Can you help me please?
Kind regards,
samid raif
Hello Raja,
Sorry for the delay of my answer! Many thanks for that tip; it helped a lot and solved the last error I reported here. In fact, the problem was that we had named the backup device with two words containing a space. We renamed the backup device (in SQL Server Management Studio) to a single word, and the job now runs successfully without any errors.
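For reference, a disk backup device with a space-free name can be created like this. The server name, device name, and path below are placeholders, not the actual production values.

```shell
# Register a disk-based backup device with a single-word logical name.
sqlcmd -S SAPSQL01 -E -Q "EXEC sp_addumpdevice 'disk', 'fullprd', 'D:\backups\prdfull.bak';"
```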
It works in the development and quality systems, but in our production system (AM1) the full backup option in DB13 stops with the following error. In fact, the error occurs during verification of the backup: the full backup itself finishes successfully, but when the backup verification runs, the DB13 job stops with this error:
***************************** SQL Job information ******************************
Jobname: SAP CCMS Full DB Backup of AM1 [20140417103726-4-103726]
Type: TSQL
DB-Name: AM1
For Run: 20140417 10:37:26
**************************** Job history information *****************************
Stepname: CCMS-step 1
Command: declare @exeStmt nvarchar(2000) exec am1.sap_backup_databases @dbList= "AM1",@r3Db="AM1",@bDev="fullprd",@expDays= 27,@jobName= "SAP CCMS Full DB Backup of AM1 [20140417103726-4-103726]",@bkupChecksum="Y",@bkupType="F",@nativeBkup="N",@exeDate = "20140417103726",@bkupSim = "N",@format = 0,@init = 0,@bkupDb = "AM1",@unload = 0,@exeStmt = @exeStmt OUTPUT
Status: (success)
Message: 3014
Severity: 0
Duration: 4 hours(s) 41 min(s) 22 sec(s)
Last msg: Executed as user: am1. Processed 7434328 pages for database 'AM1', file 'A01DATA1' on file 1. [SQLSTATE 01000] (Message 4035) Processed 3151504 pages for database 'AM1', file 'A01DATA2' on file 1. [SQLSTATE 01000] (Message 4035) Processed 4574152 pages for database 'AM1', file 'A01DATA3' on file 1. [SQLSTATE 01000] (Message 4035) Processed 4436392 pages for database 'AM1', file 'A01DATA4' on file 1. [SQLSTATE 01000] (Message 4035) Processed 25598 pages for database 'AM1', file 'A01LOG1' on file 1. [SQLSTATE 01000] (Message 4035) BACKUP DATABASE successfully processed 19621974 pages in 16881.638 seconds (9.521 MB/sec). [SQLSTATE 01000] (Message 3014). The step succeeded.
<------------- End of Job Step History --------------->
Stepname: CCMS-step 2
Command: declare @exeStmt nvarchar(2000) exec am1.sap_verify_backups @nativeBkup = "N",@bkupSim = "N",@bDev = "fullprd",@bkupChecksum = "Y",@exeDate = "20140417103726",@unload = 1,@dbCnt =1,@exeStmt = @exeStmt OUTPUT
Status: (failure)
Message: 3201
Severity: 16
Duration: 0 hours(s) 8 min(s) 30 sec(s)
Last msg: Executed as user: am1. Cannot open backup device 'fullprd(\\10.0.0.45\backupsap\prd\prdfullqua.bak)'. Operating system error 1265 (error not found). [SQLSTATE 42000] (Error 3201). The step failed.
<------------- End of Job Step History --------------->
**************************** Job history information *****************************
Can you please help me with this one? Can you tell me why the verification stops with that error?
Kind regards,
samid raif -
SQL0964C The transaction log for the database is full
Hi,
I am planning to do a QAS refresh from the PRD system using the client export/import method. I have done the export in PRD, moved the files to QAS, and started the import.
DB size: 160 GB
DB: DB2 9.7
OS: Windows 2008
I am facing the "SQL0964C The transaction log for the database is full" issue during the client import. I raised an incident with SAP about this, and they replied that I should temporarily increase some parameters (LOGPRIMARY, LOGSECOND, LOGFILSIZ) and revert them after the import. Based on that, I increased them using the calculation below.
The file system size of /db2/<SID>/log_dir should be greater than LOGFILSIZ * 4 * (LOGPRIMARY + LOGSECOND) KB.
From:
Log file size (4KB) (LOGFILSIZ) = 60000
Number of primary log files (LOGPRIMARY) = 50
Number of secondary log files (LOGSECOND) = 100
Total drive space required: 33GB
To:
Log file size (4KB) (LOGFILSIZ) = 70000
Number of primary log files (LOGPRIMARY) = 60
Number of secondary log files (LOGSECOND) = 120
Total drive space required: 47GB
But I am still facing the same issue. Please help me resolve this ASAP.
Last error TP log details:
3 ETW674Xstart import of "R3TRTABUFAGLFLEX08" ...
4 ETW000 1 entry from FAGLFLEX08 (210) deleted.
4 ETW000 1 entry for FAGLFLEX08 inserted (210*).
4 ETW675 end import of "R3TRTABUFAGLFLEX08".
3 ETW674Xstart import of "R3TRTABUFAGLFLEXA" ...
4 ETW000 [ dev trc,00000] Fri Jun 27 02:20:21 2014 -774509399 65811.628079
4 ETW000 [ dev trc,00000] *** ERROR in DB6Execute[dbdb6.c, 4980] CON = 0 (BEGIN) 85 65811.628164
4 ETW000 [ dev trc,00000] &+ DbSlModifyDB6( SQLExecute ): [IBM][CLI Driver][DB2/NT64] SQL0964C The transaction log for the database is full.
4 ETW000 83 65811.628247
4 ETW000 [ dev trc,00000] &+ SQLSTATE=57011 row=1
4 ETW000 51 65811.628298
4 ETW000 [ dev trc,00000] &+
4 ETW000 67 65811.628365
4 ETW000 [ dev trc,00000] &+ DELETE FROM "FAGLFLEXA" WHERE "RCLNT" = ?
4 ETW000 62 65811.628427
4 ETW000 [ dev trc,00000] &+ cursor type=NO_HOLD, isolation=UR, cc_release=YES, optlevel=5, degree=1, op_type=8, reopt=0
4 ETW000 58 65811.628485
4 ETW000 [ dev trc,00000] &+
4 ETW000 53 65811.628538
4 ETW000 [ dev trc,00000] &+ Input SQLDA:
4 ETW000 52 65811.628590
4 ETW000 [ dev trc,00000] &+ 1 CT=WCHAR T=VARCHAR L=6 P=9 S=0
4 ETW000 49 65811.628639
4 ETW000 [ dev trc,00000] &+
4 ETW000 50 65811.628689
4 ETW000 [ dev trc,00000] &+ Input data:
4 ETW000 49 65811.628738
4 ETW000 [ dev trc,00000] &+ row 1: 1 WCHAR I=6 "210" 34 65811.628772
4 ETW000 [ dev trc,00000] &+
4 ETW000 51 65811.628823
4 ETW000 [ dev trc,00000] &+
4 ETW000 50 65811.628873
4 ETW000 [ dev trc,00000] *** ERROR in DB6Execute[dbdb6.c, 4980] (END) 27 65811.628900
4 ETW000 [ dbtran ,00000] ***LOG BY4=>sql error -964 performing DEL on table FAGLFLEXA
4 ETW000 3428 65811.632328
4 ETW000 [ dbtran ,00000] ***LOG BY0=>SQL0964C The transaction log for the database is full. SQLSTATE=57011 row=1
4 ETW000 46 65811.632374
4 ETW000 [ dev trc,00000] dbtran ERROR LOG (hdl_dbsl_error): DbSl 'DEL' 59 65811.632433
4 ETW000 RSLT: {dbsl=99, tran=1}
4 ETW000 FHDR: {tab='FAGLFLEXA', fcode=194, mode=2, bpb=0, dbcnt=0, crsr=0,
4 ETW000 hold=0, keep=0, xfer=0, pkg=0, upto=0, init:b=0,
4 ETW000 init:p=0000000000000000, init:#=0, wa:p=0X00000000020290C0, wa:#=10000}
4 ETW000 [ dev trc,00000] dbtran ERROR LOG (hdl_dbsl_error): DbSl 'DEL' 126 65811.632559
4 ETW000 STMT: {stmt:#=0, bndfld:#=1, prop=0, distinct=0,
4 ETW000 fld:#=0, alias:p=0000000000000000, fupd:#=0, tab:#=1, where:#=2,
4 ETW000 groupby:#=0, having:#=0, order:#=0, primary=0, hint:#=0}
4 ETW000 CRSR: {tab='', id=0, hold=0, prop=0, max.in@0=1, fae:blk=0,
4 ETW000 con:id=0, con:vndr=7, val=2,
4 ETW000 key:#=3, xfer=0, xin:#=0, row:#=0, upto=0, wa:p=0X00000001421A3000}
2EETW125 SQL error "-964" during "-964" access: "SQL0964C The transaction log for the database is full. SQLSTATE=57011 row=1"
4 ETW690 COMMIT "14208" "-1"
4 ETW000 [ dev trc,00000] *** ERROR in DB6Execute[dbdb6.c, 4980] CON = 0 (BEGIN) 16208 65811.648767
4 ETW000 [ dev trc,00000] &+ DbSlModifyDB6( SQLExecute ): [IBM][CLI Driver][DB2/NT64] SQL0964C The transaction log for the database is full.
4 ETW000 75 65811.648842
4 ETW000 [ dev trc,00000] &+ SQLSTATE=57011 row=1
4 ETW000 52 65811.648894
4 ETW000 [ dev trc,00000] &+
4 ETW000 51 65811.648945
4 ETW000 [ dev trc,00000] &+ INSERT INTO DDLOG (SYSTEMID, TIMESTAMP, NBLENGTH, NOTEBOOK) VALUES ( ? , CHAR( CURRENT TIMESTAMP - CURRENT TIME
4 ETW000 50 65811.648995
4 ETW000 [ dev trc,00000] &+ ZONE ), ?, ? )
4 ETW000 49 65811.649044
4 ETW000 [ dev trc,00000] &+ cursor type=NO_HOLD, isolation=UR, cc_release=YES, optlevel=5, degree=1, op_type=15, reopt=0
4 ETW000 55 65811.649099
4 ETW000 [ dev trc,00000] &+
4 ETW000 49 65811.649148
4 ETW000 [ dev trc,00000] &+ Input SQLDA:
4 ETW000 50 65811.649198
4 ETW000 [ dev trc,00000] &+ 1 CT=WCHAR T=VARCHAR L=44 P=66 S=0
4 ETW000 47 65811.649245
4 ETW000 [ dev trc,00000] &+ 2 CT=SHORT T=SMALLINT L=2 P=2 S=0
4 ETW000 48 65811.649293
4 ETW000 [ dev trc,00000] &+ 3 CT=BINARY T=VARBINARY L=32000 P=32000 S=0
4 ETW000 47 65811.649340
4 ETW000 [ dev trc,00000] &+
4 ETW000 50 65811.649390
4 ETW000 [ dev trc,00000] &+ Input data:
4 ETW000 49 65811.649439
4 ETW000 [ dev trc,00000] &+ row 1: 1 WCHAR I=14 "R3trans" 32 65811.649471
4 ETW000 [ dev trc,00000] &+ 2 SHORT I=2 12744 32 65811.649503
4 ETW000 [ dev trc,00000] &+ 3 BINARY I=12744 00600306003200300030003900300033003300310031003300320036003400390000...
4 ETW000 64 65811.649567
4 ETW000 [ dev trc,00000] &+
4 ETW000 52 65811.649619
4 ETW000 [ dev trc,00000] &+
4 ETW000 51 65811.649670
4 ETW000 [ dev trc,00000] *** ERROR in DB6Execute[dbdb6.c, 4980] (END) 28 65811.649698
4 ETW000 [ dbsyntsp,00000] ***LOG BY4=>sql error -964 performing SEL on table DDLOG 36 65811.649734
4 ETW000 [ dbsyntsp,00000] ***LOG BY0=>SQL0964C The transaction log for the database is full. SQLSTATE=57011 row=1
4 ETW000 46 65811.649780
4 ETW000 [ dbsync ,00000] ***LOG BZY=>unexpected return code 2 calling ins_ddlog 37 65811.649817
4 ETW000 [ dev trc,00000] db_syflush (TRUE) failed 26 65811.649843
4 ETW000 [ dev trc,00000] db_con_commit received error 1024 in before-commit action, returning 8
4 ETW000 57 65811.649900
4 ETW000 [ dbeh.c ,00000] *** ERROR => missing return code handler 1974 65811.651874
4 ETW000 caller does not handle code 1024 from dblink#5[321]
4 ETW000 ==> calling sap_dext to abort transaction
2EETW000 sap_dext called with msgnr "900":
2EETW125 SQL error "-964" during "-964" access: "SQL0964C The transaction log for the database is full. SQLSTATE=57011 row=1"
1 ETP154 MAIN IMPORT
1 ETP110 end date and time : "20140627022021"
1 ETP111 exit code : "12"
1 ETP199 ######################################
Regards,
Rajesh
Hi Babu,
I believe you need to restart your database if LOGPRIMARY is changed. If so, then increase LOGPRIMARY to 120 and LOGSECOND to 80, provided the file system size and free space are sufficient.
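The parameter change itself could be applied like this. This is a sketch: the database alias PRD is a placeholder, and note that a LOGPRIMARY change only takes effect once the database has been deactivated and reactivated.

```shell
# Increase the DB2 log configuration; values from the suggestion above.
db2 connect to PRD
db2 update db cfg for PRD using LOGPRIMARY 120 LOGSECOND 80
db2 connect reset

# LOGPRIMARY takes effect on the next activation of the database.
db2 deactivate db PRD
db2 activate db PRD
```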
Note 1293475 - DB6: Transaction Log Full
Note 1308895 - DB6: File System for Transaction Log is Full
Note 495297 - DB6: Monitoring transaction log
Regards,
Divyanshu -
I recently created a PowerPivot workbook that got pushed out to my enterprise's tabular server. And while our current IT policies keep me from being able to be an SSAS Administrator, I
did get included in the Admin role for the database itself (not the server.) And that's where I'm a little lost.
The database permissions for the Admin role are set for Full control, Process database, and Read. And yes, I can read and process the database. But is it possible for me to make any changes to the model itself? The only way I could think of to make changes
was to suck the model back down into Visual Studio, edit, and redeploy. But it looks like deploying requires being an Admin of the
server to push the metadata.
I'd love to know if I'm understanding things correctly.
Thanks so much,
Henson
**EDIT** Looking at this page, it seems that all you can really do on a database level is delete it or change the properties, not the model itself: http://msdn.microsoft.com/en-us/library/ms175596.aspx
Hi Techhenson,
According to your description, you want to know if you can change the SQL Server Analysis Services tabular model as a database administrator, right?
I have tested it in my local environment: as a database administrator, we can change the tabular model properties in SQL Server Management Studio (SSMS). If we need to change the structure of the tabular model, we need to create an Import Project to import
the model into SQL Server Data Tools (SSDT), and then modify the model and redeploy it to the server.
Regards,
Charlie Liao
TechNet Community Support -
Aperture 3 - Copy/paste metadata to external database
Hi all, I'm trying to create an external database of the latitude, longitude, direction, date, and other metadata for ~1000 photos. I can see the metadata alongside the image in browser mode, or have it appear at the bottom of the image in full-screen view. I'd like to be able to a) export a table of those data to a text file that I can put in a spreadsheet (the fast way), or b) copy/paste those data separately into the spreadsheet. As it stands now, I can do neither: I cannot highlight and copy the text. Does someone know where those data are stored, or have a solution?
I appreciate your help.
I have the same issue. This is what I get when exporting the data:
Version Name Title Urgency Categories Suppl. Categories Keywords Instructions Date Created Contact Creator Contact Job Title City State/Province Country Job Identifier Headline Provider Source Copyright Notice Caption Caption Writer Rating IPTC Subject Code Usage Terms Intellectual Genre IPTC Scene Location ISO Country Code Contact Address Contact City Contact State/Providence Contact Postal Code Contact Country Contact Phone Contact Email Contact Website Label Latitude Longitude Altitude AltitudeRef
DSC09894 0
What am I missing that actually will have it export the real values? -
Import all users and their objects without doing full database import
Hi Guys,
I have a task that involves importing all existing users and their objects from production into the test database without doing a full database import. How do I do this?
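For copying users and their objects without a full export, Data Pump's SCHEMAS mode is the usual route. This is a sketch only: the schema names, directory object, and file names are placeholders.

```shell
# Export only the application schemas from production...
expdp system/password SCHEMAS=user1,user2 \
     DIRECTORY=dp_dir DUMPFILE=schemas.dmp LOGFILE=schemas_exp.log

# ...then import those schemas into the test database.
impdp system/password SCHEMAS=user1,user2 \
     DIRECTORY=dp_dir DUMPFILE=schemas.dmp LOGFILE=schemas_imp.log
```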
Please, I need your help urgently.
SQL> select * from v$version;
BANNER
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
I tried to import objects and data from a user from a FULL dump file. File was created with the following command:
server is: SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 26 15:34:05 2010
exp full=y FILE="full.dmp" log=full.log
Now I imported:
imp file=full.dmp log=full.log INCTYPE=SYSTEM
imp fromuser=user1 file=full.dmp
Results: not all the user procedures have been imported:
SQL> select count(*) from user_procedures;
the Original
COUNT(*)
134
the current:
select count(*) from user_procedures;
COUNT(*)
18
I also tried these alternatives:
exp tablespaces="user1_data" FILE="user1.dmp" log=user1.log
exp LOG=user1.log TABLESPACES=user1_data FILE=user1_data.dmp
exp LOG=user1owner.log owner=user1 FILE=user1owner.dmp
expdp DIRECTORY=dpump_dir1 dumpfile=servdata_pump version=compatible SCHEMAS=user1
impdp directory=data_pump_dir dumpfile=servdata_pump.dmp :
ORA-39213: Metadata processing is not available
SQL> execute dbms_metadata_util.load_stylesheets
BEGIN dbms_metadata_util.load_stylesheets; END;
ERROR at line 1:
ORA-31609: error loading file "kualter.xsl" from file system directory
"/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/rdbms/xml/xsl"
ORA-06512: at "SYS.DBMS_METADATA_UTIL", line 1793
ORA-06512: at line 1
file kualter.xsl does not exist in XE !!
imp owner=user1 rows=n ignore=y
imp full=y file=user1_data.dmp
imp full=y file=full.dmp tablespaces=user1_data,user1_index rows=n ignore=y
So, I do not understand why user1 objects are not imported:
see this part of the first import log:
Export file created by EXPORT:V10.02.01 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing SYS's objects into SYS
. importing USER1's objects into USER1
. . importing table .........................
Why were only 18 imported?
If you have any suggestion, you are welcome, as I do not have any other idea...
ren
How to choose the right server for an Oracle database
Hi all,
Is there any formula that allows me to find the appropriate server configuration for Oracle Database 11g?
Please help!
Thank you all.
Dan.
Simply put ... no.
Determining the correct server is a dance between hardware vendors (one point of input to be treated with general distrust), internal expertise and experience working with similar systems, and most importantly understanding the behaviour of what you are building.
Here are some sample questions to start the ball rolling.
1. What operating system expertise does your organization have?
2. What operating system expertise is readily available in the community from which you hire applicants?
3. What type of application?
4. What are up-time expectations?
5. Requirements for high availability?
6. Requirements for time to perform a full recovery?
7. Requirements for security?
8. Storage footprint (dictates internal storage versus SAN, NAS, or ZFS)
9. Anticipated growth (3-7 years)
Everyone wants to sell you CPU clock-ticks, more RAM, more IOPS, etc. But the first thing you need to understand is the generic capabilities you require: those required of any solution. Only when you have a clear understanding of these factors are you ready to discuss whether the solution is a pizza box or an ODA ... an Exadata or an IBM System Z.
Far too often I find people purchasing RAM they don't need, with CPU speeds that exceed the ability of their network infrastructure to move the data. So, when you are ready to make your purchase, pay special attention to engineered systems like the ODA.
Help: Creating a cluster for Oracle Database
In school, we have been testing Grid Infrastructure, ASM, and Oracle Database,
but now, for a research project, we need to install Grid Infrastructure, ASM, and Oracle Database using at least 3 computers.
Where should we start?
We want to install everything from the beginning, I mean: the network, the operating system on each computer, the disk slices for ASM, and the Oracle database.
thanks in advance!

1. Install the OS
2. Register the IPs in DNS
3. Prepare the disk storage for ASM
4. Create users, groups, and directories
5. Install the package requirements (see the documentation or MOS)
6. Run cluvfy to verify that the systems are ready to be installed
7. Install Grid Infrastructure
8. Create the disk groups in ASM
9. Install the Oracle software
10. Create the database
11. Verify the installation
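A hedged sketch of step 6 (the media path and the node names node1–node3 are hypothetical; runcluvfy.sh ships in the unzipped Grid Infrastructure installation media):

```shell
# Sketch: pre-installation verification with the Cluster Verification
# Utility, run as the grid/oracle OS user from the unzipped media.
cd /stage/grid                                    # hypothetical media location
./runcluvfy.sh stage -pre crsinst -n node1,node2,node3 -verbose
```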
Finally, please read http://download.oracle.com/docs/cd/E11882_01/install.112/e17214/toc.htm for full details. Post here if you encounter problems.
Cheers
How can I read out metadata for captions from a Canon photo
I have tried to find an answer via Adobe and Google, but I am still not able to figure out how to read XMP caption metadata out of my photos. Checking the XMP data in InDesign shows info on Camera Make, Exposure, etc., but if I use metadata fields from "Caption Setup" like "Camera", the caption shows <No data from link>. Fields where the XMP field name matches (e.g. "Lens") are OK. I need the following fields for the caption: Camera Make, Exposure (aperture and speed), and ISO.
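One quick way to see which metadata fields actually exist in the file is a command-line tool such as ExifTool (an assumption on my part; it is not mentioned in this thread), since a caption variable can only resolve fields that are really present in the photo:

```shell
# Sketch: list the maker/exposure fields the caption needs,
# then dump every XMP field the file actually contains.
# photo.jpg is a placeholder filename.
exiftool -Make -Model -ExposureTime -FNumber -ISO photo.jpg
exiftool -XMP:all photo.jpg
```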
Let's keep it simple then.
If I take my generated UTF file and open it in Notepad, I can see the accented characters. But if I open it in WordPad, the accented characters are corrupted.
If I then save the file, specifying type ASCII, the characters are written out correctly.
What I want is to be able to write out the file in ASCII format without having to open it in UTF mode and then save it as ASCII.
I.e., I want the file to be opened in ASCII format,
with all the characters written in ASCII format,
but the source is still a Unicode database.
I have tried using CONVERT and characters get lost. In fact, at this stage, I'm not sure it's possible to do what I need to do.
Remember I am using an 8-bit character set, which is why I have values above 127.
So basically, if you take the word 'Annulé':
if I view it in WordPad it displays as
AnnulÃ©
but if I view it in Notepad it displays as
'Annulé', which is because Notepad detects that the file has UTF-8 characters in it.
When I save it as type ANSI I can then open it correctly in WordPad.
So I basically want to open this file in WordPad and have it display 'Annulé' rather than the garbled characters,
without having to go through the process of opening and saving the file as type ANSI.
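A minimal Python sketch (an assumption for illustration; Python is not part of this thread) that models the byte-level behaviour described above, with latin-1 standing in for WordPad's 8-bit ANSI view:

```python
# 'Annulé' stored as UTF-8 uses two bytes for the accented character.
utf8_bytes = "Annulé".encode("utf-8")              # b'Annul\xc3\xa9'

# An ANSI-only viewer decodes each byte separately, producing mojibake:
garbled = utf8_bytes.decode("latin-1")             # 'AnnulÃ©'

# Transcoding UTF-8 -> latin-1 once yields a file such a viewer reads correctly:
ansi_bytes = utf8_bytes.decode("utf-8").encode("latin-1")   # b'Annul\xe9'
restored = ansi_bytes.decode("latin-1")            # 'Annulé'
```

The one-time transcode in the last two lines is exactly what the manual open-in-Notepad-and-save-as-ANSI step was doing by hand.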