Standby database for a query database

Hi,
Can I use a standby database as a query database?
My idea is:
create a standby database from production,
open it daily in read-only mode,
and recover the standby database nightly.
Is this possible?
Thanks in advance for your help.
gnom92.

There is no rule of thumb. It really depends on how much REDO is generated by the primary database. If the primary is CPU-bound processing transactions 24 hours a day, it will take a lot longer to recover the standby than it would if the primary has a moderate transaction volume.
If you have tested your recovery plan recently, you can probably make a pretty good estimate, since that is basically what the standby is doing. If it takes you an hour to apply a day's worth of archive logs to recover your system, it will take an hour to recover your standby database (assuming identical hardware & software configuration).
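For reference, the daily cycle the poster describes usually comes down to a few statements on the standby. A minimal sketch, assuming a manually managed physical standby (exact commands depend on the Oracle version, and all read-only sessions must be ended before recovery resumes):
{code}
-- Morning: stop recovery and open the standby for reporting
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
-- Night: resume recovery and apply the day's archived redo
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
{code}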
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC

Similar Messages

  • Significant difference in response times for same query running on Windows client vs database server

    I have a query which is taking a long time to return the results using the Oracle client.
    When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
    When I run the same query on a Windows client it completes in 47 minutes.
    Ideally I would like to get a response time equivalent on the Windows client to what I get when running this on the database server.
    In both cases the query plans are the same.
    The query and plan is shown below :
    {code}
    SQL> explain plan
      2  set statement_id = 'SLOW'
      3  for
      4  SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
      5  FROM documents objecttype WHERE objecttype.id_type_definition = 'duotA9'
      6  ;
    Explained.
    SQL> select * from table(dbms_xplan.display('PLAN_TABLE','SLOW','TYPICAL'));
    PLAN_TABLE_OUTPUT
    | Id  | Operation          | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)|
    |   0 | SELECT STATEMENT   |           |  2852K|    46M|       | 69851   (1)|
    |   1 |  HASH UNIQUE       |           |  2852K|    46M|   153M| 69851   (1)|
    |*  2 |   TABLE ACCESS FULL| DOCUMENTS |  2852K|    46M|       | 54063   (1)|
    {code}
    Are there any configuration changes that can be made on the Oracle client or database to improve the response times for the query when it is run from the client?
    The version on the database server is 10.2.0.1.0.
    The version of the Oracle client is also 10.2.0.1.0.
    I am happy to provide any further information if required.
    Thank you in advance.

    I have a query which is taking a long time to return the results using the Oracle client.
    When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
    When I run the same query on a Windows client it completes in 47 minutes.
    There are NO queries that 'run' on a client. Queries ALWAYS run within the database server.
    A client can choose when to FETCH query results. In SQL Developer (or Toad) I can choose to get 10 rows at a time. Until I choose to get the next set of 10 rows, NO rows will be returned from the server to the client; that query might NEVER complete.
    You may be seeing exactly that behaviour, depending on the client you are using. Post your question in a forum for whatever client you are using.
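    If the client turns out to be fetching only a handful of rows per round trip, the fetch size is usually the first thing to check. A minimal sketch, assuming SQL*Plus (other clients expose an equivalent fetch-size setting; the SQL*Plus default arraysize is 15):
    {code}
    SET ARRAYSIZE 500
    SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
    FROM documents objecttype
    WHERE objecttype.id_type_definition = 'duotA9';
    {code}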

  • Help for: ORA-01103: database name PRIMARY in control file is not STANDBY

    Hello all, this will be my first post to the support forum. I'm an associate DBA with just 6 months on the job, so if I've forgotten something or not given some information that is needed, please let me know.
    I've also combed the forums/internet, and some of the answers haven't helped. The Oracle document "ORA-1103 While Mounting the Database Using PFILE [ID 237073.1]" says my init.ora file is corrupted, but creating a new init.ora file from the spfile does not help. Neither does just starting from the spfile. I have older copies of the init.ora file and the spfiles that the database was running on previously, so I believe they are good.
    This standby NIRNASD1 existed previously; I had to refresh the primary NIKNASD2 and then re-instantiate NIRNASD1 after the refresh was complete.
    My env is set correctly, and my ORACLE_SID has been exported to NIRNASD1
    NIKNASD2 = Primary Database
    NIRNASD1 = Secondary/Standby Database
    Goal: Creation of Logical Standby NIRNASD1 after creating Physical Standby from NIKNASD2
    My database versions are 10.2.0.4.0, and the databases are on a Unix server. Both databases are located on separate servers.
    Steps that I have taken:
    I used RMAN to backup our primary database to the staging area:
    $ rman target /
    run {
    backup database
    format '/datatransa/dg_stage/%U'
    include current controlfile for standby;
    sql "alter system archive log current";
    backup archivelog all format '/datatransa/dg_stage/%U';
    }
    I used RMAN to Create Secondary Database utilizing RMAN DUPLICATE command.
    RMAN> run {
    2> allocate auxiliary channel auxdisk device type disk;     
    3> duplicate target database for standby NOFILENAMECHECK;
    4> }
    On Secondary database I started Managed Recovery mode
    SQL> shutdown immediate;
    ORA-01109: database not open
    Database dismounted.
    ORACLE instance shut down.
    (I used the pfile here, thinking that I needed to mount the database with the pfile so that the database would see the change in the Data Guard parameters in the init.ora file, the change from logical to physical - I commented out the logical line and uncommented the physical line)
    # Dataguard Parameters
    # For logical standby, change db_name to the name of the standby database.
    db_name=NIKNASD2 ### for physical, db_name is same as primary
    #db_name=NIRNASD1 ### for logical, db_name is same as unique_name
    SQL> STARTUP MOUNT PFILE = /oraa/app/oracle/product/1020/admin/NIRNASD1/pfile/initNIRNASD1.ora;
    ORACLE instance started.
    Total System Global Area 1577058304 bytes
    Fixed Size 2084368 bytes
    Variable Size 385876464 bytes
    Database Buffers 1174405120 bytes
    Redo Buffers 14692352 bytes
    Database mounted.
    SQL> ALTER DATABASE recover managed standby database using current logfile disconnect;
    I then verified the Data Guard Configuration by using “alter system archive log current;” on the primary database and watching the sequence number change in the secondary database.
    I made sure that:
    •     The primary database was in MAXIMUM PERFORMANCE MODE
    •     Stopped managed recover on the standby database: alter database recover managed standby database cancel;
    •     Built a logical standby data dictionary on the primary database
    •     The db_name in init.ora was changed (this is in our document at my job)
    •     I changed my database name (from physical to logical) in my init.ora pfile (reverse of what I did above)
    # Dataguard Parameters
    # For logical standby, change db_name to the name of the standby database.
    #db_name=NIKNASD2 ### for physical, db_name is same as primary
    db_name=NIRNASD1 ### for logical, db_name is same as unique_name
    I then went to shutdown my standby database and re-start it in a mount exclusive state, which is where I get the ORA-01103 Error (Again I used the pfile, thinking that I needed to tell the database it is now a logical standby):
    SQL> shutdown immediate;
    ORA-01109: database not open
    Database dismounted.
    ORACLE instance shut down.
    SQL> STARTUP EXCLUSIVE MOUNT PFILE = /oraa/app/oracle/product/1020/admin/NIRNASD1/pfile/initNIRNASD1.ora;
    ORACLE instance started.
    Total System Global Area 1577058304 bytes
    Fixed Size 2084368 bytes
    Variable Size 385876464 bytes
    Database Buffers 1174405120 bytes
    Redo Buffers 14692352 bytes
    ORA-01103: database name 'NIKNASD2' in control file is not 'NIRNASD1'
    From what I understand of the process, the name in the control file is correct, I want it to be NIRNASD1. But the database for some reason thinks it should be NIKNASD2. The following are the parts of my init.ora file that include the dataguard parameters:
    # Database Identification
    db_domain=""
    #db_name=NIRNASD1
    #db_unique_name=NIRNASD1
    # File Configuration
    control_files=("/oradba2/oradata/NIRNASD1/control01.ctl", "/oradba3/oradata/NIRNASD1/control02.ctl", "/oradba4/oradata/NIRNASD1/control03.ctl")
    # Instance Identification
    instance_name=NIRNASD1
    # Dataguard Parameters
    #db_name=NIKNASD2 ### for physical, db_name is same as primary
    db_name=NIRNASD1 ### for logical, db_name is same as unique_name
    db_unique_name=NIRNASD1
    dg_broker_start=TRUE
    db_file_name_convert='NIKNASD2','NIRNASD1'
    log_file_name_convert='NIKNASD2','NIRNASD1'
    log_archive_config='dg_config=(NIRNASD1,NIKNASD2)'
    log_archive_dest_1='LOCATION="/oraarcha/NIRNASD1/" valid_for=(ONLINE_LOGFILES,all_roles) db_unique_name=NIRNASD1'
    #log_archive_dest_2='LOCATION="/oraarcha/NIKNASD2/" valid_for=(standby_logfiles,standby_roles) db_unique_name=NIRNASD1'
    log_archive_dest_2='LOCATION="/oraarcha/NIKNASD2/" valid_for=(standby_logfile,standby_role) db_unique_name=NIRNASD1'
    STANDBY_ARCHIVE_DEST='LOCATION=/oraarcha/NIKNASD2/'
    # Parameters are not needed since this server will NOT become primary
    #log_archive_dest_2='service=NIKNASD2
    # valid_for=(online_logfiles,primary_role)
    # db_unique_name=NIKNASD2'
    fal_server='NIKNASD2'
    fal_client='NIRNASD1'
    I would appreciate any help, or pointing me in the right direction. I'm just missing something. I am reviewing the documents for building a physical and logical standby from oracle. Just not sure where to go from here.
    Thank you
    Edited by: 977917 on Dec 19, 2012 5:49 PM

    First of all, thank you both for answering my post. I've pulled up Mr. Hesse's page and will make it a go-to staple.
    We're in the process of upgrading our databases, but we have 130+ databases and only six Oracle dba's, and I'm one of them. It's a large corporation, and things move at a "slow and tested" pace.
    The pfile parameters listed above are from my secondary/standby database. And I do want to create a logical standby.
    I forgot to mention that we do use DataGuard Broker, but I did not think that would be the cause of why the database was starting up incorrectly, so I did not mention it. My apologies there.
    As far as the db_name, here's my question on that. It's my understanding that the db_name should be the name of the primary database when you are working with a physical standby, but as soon as you convert it to a logical standby, you should change the db_name to the secondary/standby database? Am I correct on that?
    Leading from that, during the process of creating the physical standby and converting the physical standby to the logical standby, should I change the db_name in the secondary/standby database in the spfile and never use the pfile at all? For instance, when I create the physical standby I have to change the db_name in the standby to the PRIMARY database, so that makes me think I should change db_name in the spfile? (If you see above, I changed db_name in the pfile and did a startup pfile)
    This morning I was able to reach out to a fellow DBA (they were asleep when I posted this last night), and they tried a few things. We had a redirect (symbolic link) in the standby /oraa/app/oracle/product/1020/dbs directory that looked like this: spfileNIRNASD1.ora -> /oraa/app/oracle/product/1020/admin/NIRNASD1/pfile/spfileNIRNASD1.ora
    She removed the redirect and the startup mount exclusive then worked without the error.
    Thank you again for your help Mr.Quluzade and Mr. Hesse, I appreciate you all taking the time to teach someone new to the craft. I will definitely read up on the link that you sent me.
    Chris Cranford
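    For readers hitting the same ORA-01103: the documented 10.2 route for converting a physical standby into a logical standby does not involve hand-editing db_name at all; the conversion command itself rewrites the database name in the control file. A rough sketch only - check the Data Guard guide for your exact version before relying on it:
    {code}
    -- On the standby: stop redo apply
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    -- On the primary: build the LogMiner dictionary into the redo stream
    EXECUTE DBMS_LOGSTDBY.BUILD;
    -- On the standby: convert (this renames the database in the control file)
    ALTER DATABASE RECOVER TO LOGICAL STANDBY NIRNASD1;
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE OPEN RESETLOGS;
    ALTER DATABASE START LOGICAL STANDBY APPLY;
    {code}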

  • Standalone Standby Creation for RAC database

    Hi,
    I am in the process of configuring a standalone standby for my RAC database. The database version is 11.2.0.3. RAC is using SCAN listeners. I have MOS note 387339.1 for configuring a standalone standby for RAC, but I need to know the specific steps to perform in the case of SCAN listeners.
    Any document or link is appreciated.
    Regards,

    Hello;
    The white paper "Rapid Oracle RAC Standby Deployment: Oracle Database 11g Release 2" has a section on this.
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-rac-standby-133152.pdf
    This may also help :
    http://www.oracle.com/technetwork/database/availability/maa-wp-11g-racone-standby-501088.pdf
    And these :
    Configuring and Administering Oracle Net Listener
    http://docs.oracle.com/cd/E11882_01/network.112/e10836/listenercfg.htm
    srvctl relocate scan
    http://download.oracle.com/docs/cd/E11882_01/rac.112/e16795/srvctladmin.htm#RACAD7499
    srvctl relocate scan_listener
    http://download.oracle.com/docs/cd/E11882_01/rac.112/e16795/srvctladmin.htm#CDDIDDCF
    Troubleshooting Oracle Clusterware
    http://download.oracle.com/docs/cd/E11882_01/rac.112/e16794/troubleshoot.htm#CHDFJIEG
    From Oracle support :
    How to Setup SCAN Listener and Client for TAF and Load Balancing [Video] [ID 1188736.1]
    Best Regards
    mseberg
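    One hedged note to add (not from the original reply): with SCAN, the standby-side setup barely changes; the TNS aliases referenced by fal_server and log_archive_dest_n simply resolve to the SCAN name instead of individual node VIPs. The alias and names below are placeholders for illustration:
    {code}
    -- On the single-instance standby, assuming a TNS alias PRIMRAC that points at the primary's SCAN address
    ALTER SYSTEM SET fal_server='PRIMRAC' SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_2='SERVICE=PRIMRAC ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMRAC' SCOPE=BOTH;
    {code}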

  • Query to get database size, used size, free size, and db size after drop

    Please give me a query to get the database size, used size, free size, and the tablespace size of the tables whose names start with 'RQ%'.
    I have to get a result like this:
    Total size, used size, free size, and tablespace size of the RQ tables alone.
    The reason I ask for the "tablespace size of RQ tables" is that I want to know the size of the database after deleting the RQ tables.
    Please reply
    S

    I tried with
    SELECT
    --fs.tablespace_name name,
    df.totalspace/1024/1024 mbytes,
    (df.totalspace - fs.freespace)/1024/1024 used,
    fs.freespace/1024/1024 free
    FROM
    (SELECT
    --tablespace_name,
    ROUND(SUM(bytes)) TotalSpace
    FROM
    dba_data_files
    ) df,
    (SELECT
    --tablespace_name,
    ROUND(SUM(bytes)) FreeSpace
    FROM
    dba_free_space
    ) fs
    I am getting total bytes, used bytes, and free bytes.
    I want to include one more column: the database size after deleting the RQ tables.
    Please reply
    S
    Edited by: AswinGousalya on Jul 10, 2009 2:17 PM
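    A hedged sketch of the missing piece (not from the original thread): the space currently occupied by the RQ tables can be read from dba_segments, and subtracting that figure from the used space above gives an estimate of the database size after dropping them. Indexes and LOB segments belonging to those tables would need to be added for a complete number:
    {code}
    SELECT SUM(bytes)/1024/1024 AS rq_tables_mb
    FROM   dba_segments
    WHERE  segment_type = 'TABLE'
    AND    segment_name LIKE 'RQ%';
    {code}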

  • Error adding a datafile in a standby database

    Hi all.
    My Environment is as below:
    Oracle-8.1.7.4.0
    OS-HP Unix-11
    Primary database (only 1): Production
    Standby database: Different Machine but same location (HP box)
    Yesterday I added 2 datafiles to two different tablespaces. I have checked that the files are available on the production box, and one of the files is also available on the standby database.
    When I follow the steps for applying redo logs to the standby manually,
    I get an error.
    SVRMGRL>connect internal
    SVRMGRL>show parameter db_name
    SVRMGRL>recover standby database
    After above step I got the Error:
    ORA-00283 recovery session canceled due to error
    ORA-01157 cannot identify/lock data file 24 - see DBWR trace file
    ORA-01110 data file 24: '/location of .dbf file on standby database disk'
    Please let me know in detail because I am new in this field.
    Thanks in advance

    You will have the datafile information on the standby alert log.
    Something like '/u01/app/oracle/product/8174/db/<filename>.dbf'.
    1. connect as sysdba on standby database.
    2. alter database create datafile 'Production datafile name' as 'alert log filename';
    Example :
    alter database create datafile '/u01/data/user1.dbf'
    as '/u01/app/oracle/product/8174/db/<filename>.dbf';
    3. Recover managed standby database;
    HTH.
    Regards,
    Arun
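    A small hedged addition (not part of the original reply): before recreating the file, it can help to confirm exactly what the standby control file records for file 24, since with manual standby file management the new file often appears under a placeholder name:
    {code}
    SELECT file#, name
    FROM   v$datafile
    WHERE  file# = 24;
    {code}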

  • Selective XML Index feature is not supported for the current database version , SQL Server Extended Events , Optimizing Reading from XML column datatype

    Team, thanks for looking into this.
    As a last resort for optimizing my stored procedure (below) I wanted to create a selective XML index (normal XML indexes do not seem to improve performance as needed), but I keep getting this error within my stored proc: "Selective XML Index feature is not supported for the current database version." However, EXECUTE sys.sp_db_selective_xml_index; returns 1, stating selective XML indexes are enabled on my current database.
    Is there ANY alternative way I can optimize the stored proc below?
    Thanks in advance for your response(s) !
    /****** Object: StoredProcedure [dbo].[MN_Process_DDLSchema_Changes] Script Date: 3/11/2015 3:10:42 PM ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    -- EXEC [dbo].[MN_Process_DDLSchema_Changes]
    ALTER PROCEDURE [dbo].[MN_Process_DDLSchema_Changes]
    AS
    BEGIN
    SET NOCOUNT ON --Does'nt have impact ( May be this wont on SQL Server Extended events session's being created on Server(s) , DB's )
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    select getdate() as getdate_0
    DECLARE @XML XML , @Prev_Insertion_time DATETIME
    -- Staging Previous Load time for filtering purpose ( Performance optimize while on insert )
    SET @Prev_Insertion_time = (SELECT MAX(EE_Time_Stamp) FROM dbo.MN_DDLSchema_Changes_log ) -- Perf Optimize
    -- PRINT '1'
    CREATE TABLE #Temp
    (
    EventName VARCHAR(100),
    Time_Stamp_EE DATETIME,
    ObjectName VARCHAR(100),
    ObjectType VARCHAR(100),
    DbName VARCHAR(100),
    ddl_Phase VARCHAR(50),
    ClientAppName VARCHAR(2000),
    ClientHostName VARCHAR(100),
    server_instance_name VARCHAR(100),
    ServerPrincipalName VARCHAR(100),
    nt_username varchar(100),
    SqlText NVARCHAR(MAX)
    )
    CREATE TABLE #XML_Hold
    (
    ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY , -- PK necessity for Indexing on XML Col
    BufferXml XML
    )
    select getdate() as getdate_01
    INSERT INTO #XML_Hold (BufferXml)
    SELECT
    CAST(target_data AS XML) AS BufferXml -- Buffer Storage from SQL Extended Event(s) , Looks like there is a limitation with xml size ?? Need to re-search .
    FROM sys.dm_xe_session_targets xet
    INNER JOIN sys.dm_xe_sessions xes
    ON xes.address = xet.event_session_address
    WHERE xes.name = 'Capture DDL Schema Changes' --Ryelugu : 03/05/2015 Session being created withing SQL Server Extended Events
    --RETURN
    --SELECT * FROM #XML_Hold
    select getdate() as getdate_1
    -- 03/10/2015 RYelugu : Error while creating XML Index : Selective XML Index feature is not supported for the current database version
    CREATE SELECTIVE XML INDEX SXI_TimeStamp ON #XML_Hold(BufferXml)
    FOR
    (
    PathTimeStamp = '/RingBufferTarget/event/timestamp' AS XQUERY 'node()'
    )
    --RETURN
    --CREATE PRIMARY XML INDEX [IX_XML_Hold] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index
    --SELECT GETDATE() AS GETDATE_2
    -- RYelugu 03/10/2015 -Creating secondary XML index doesnt make significant improvement at Query Optimizer , Instead creation takes more time , Only primary should be good here
    --CREATE XML INDEX [IX_XML_Hold_values] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index , --There should exists a Primary for a secondary creation
    --USING XML INDEX [IX_XML_Hold]
    ---- FOR VALUE
    -- --FOR PROPERTY
    -- FOR PATH
    --SELECT GETDATE() AS GETDATE_3
    --PRINT '2'
    -- RETURN
    SELECT GETDATE() GETDATE_3
    INSERT INTO #Temp
    (
    EventName ,
    Time_Stamp_EE ,
    ObjectName ,
    ObjectType,
    DbName ,
    ddl_Phase ,
    ClientAppName ,
    ClientHostName,
    server_instance_name,
    nt_username,
    ServerPrincipalName ,
    SqlText
    )
    SELECT
    p.q.value('@name[1]','varchar(100)') AS eventname,
    p.q.value('@timestamp[1]','datetime') AS timestampvalue,
    p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') AS objectname,
    p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') AS ObjectType,
    p.q.value('(./action[@name="database_name"]/value)[1]','varchar(100)') AS databasename,
    p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') AS ddl_phase,
    p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') AS clientappname,
    p.q.value('(./action[@name="client_hostname"]/value)[1]','varchar(100)') AS clienthostname,
    p.q.value('(./action[@name="server_instance_name"]/value)[1]','varchar(100)') AS server_instance_name,
    p.q.value('(./action[@name="nt_username"]/value)[1]','varchar(100)') AS nt_username,
    p.q.value('(./action[@name="server_principal_name"]/value)[1]','varchar(100)') AS serverprincipalname,
    p.q.value('(./action[@name="sql_text"]/value)[1]','Nvarchar(max)') AS sqltext
    FROM #XML_Hold
    CROSS APPLY BufferXml.nodes('/RingBufferTarget/event')p(q)
    WHERE -- Ryelugu 03/05/2015 - Perf Optimize - Filtering the Buffered XML so as not to lookup at previoulsy loaded records into stage table
    p.q.value('@timestamp[1]','datetime') >= ISNULL(@Prev_Insertion_time ,p.q.value('@timestamp[1]','datetime'))
    AND p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') ='Commit' --Ryelugu 03/06/2015 - Every Event records a begin version and a commit version into Buffer ( XML ) we need the committed version
    AND p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
    AND p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
    AND p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') <> 'Replication Monitor' --Ryelugu : 03/09/2015 We do not want any records being caprutred by Replication Monitor ??
    SELECT GETDATE() GETDATE_4
    -- SELECT * FROM #TEMP
    -- SELECT COUNT(*) FROM #TEMP
    -- SELECT GETDATE()
    -- RETURN
    -- PRINT '3'
    --RETURN
    INSERT INTO [dbo].[MN_DDLSchema_Changes_log]
    (
    [UserName]
    ,[DbName]
    ,[ObjectName]
    ,[client_app_name]
    ,[ClientHostName]
    ,[ServerName]
    ,[SQL_TEXT]
    ,[EE_Time_Stamp]
    ,[Event_Name]
    )
    SELECT
    CASE WHEN T.nt_username IS NULL OR LEN(T.nt_username) = 0 THEN t.ServerPrincipalName
    ELSE T.nt_username
    END
    ,T.DbName
    ,T.objectname
    ,T.clientappname
    ,t.ClientHostName
    ,T.server_instance_name
    ,T.sqltext
    ,T.Time_Stamp_EE
    ,T.eventname
    FROM
    #TEMP T
    /** -- RYelugu 03/06/2015 - Filters are now being applied directly while retrieving records from BUFFER or on XML
    -- Ryelugu 03/15/2015 - More filters are likely to be added on further testing
    WHERE ddl_Phase ='Commit'
    AND ObjectType <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
    AND ObjectName NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
    AND T.Time_Stamp_EE >= @Prev_Insertion_time --Ryelugu 03/05/2015 - Performance Optimize
    AND NOT EXISTS ( SELECT 1 FROM [dbo].[MN_DDLSchema_Changes_log] MN
    WHERE MN.[ServerName] = T.server_instance_name -- Ryelugu Server Name needes to be added on to to xml ( Events in session )
    AND MN.[DbName] = T.DbName
    AND MN.[Event_Name] = T.EventName
    AND MN.[ObjectName]= T.ObjectName
    AND MN.[EE_Time_Stamp] = T.Time_Stamp_EE
    AND MN.[SQL_TEXT] =T.SqlText -- Ryelugu 03/05/2015 This is a comparision Metric as well , But needs to decide on
    -- Performance Factor here , Will take advice from Lance if comparison on varchar(max) is a vital idea
    */
    --SELECT GETDATE()
    --PRINT '4'
    --RETURN
    SELECT
    top 100
    [EE_Time_Stamp]
    ,[ServerName]
    ,[DbName]
    ,[Event_Name]
    ,[ObjectName]
    ,[UserName]
    ,[SQL_TEXT]
    ,[client_app_name]
    ,[Created_Date]
    ,[ClientHostName]
    FROM
    [dbo].[MN_DDLSchema_Changes_log]
    ORDER BY [EE_Time_Stamp] desc
    -- select getdate()
    -- ** DELETE EVENTS after logging into Physical table
    -- NEED TO Identify if this @XML can be updated into physical system table such that previously loaded events are left untoched
    -- SET @XML.modify('delete /event/class/.[@timestamp="2015-03-06T13:01:19.020Z"]')
    -- SELECT @XML
    SELECT GETDATE() GETDATE_5
    END
    GO
    Rajkumar Yelugu

    @@Version :
    Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
        May 14 2014 18:34:29
        Copyright (c) Microsoft Corporation
        Developer Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
    (1 row(s) affected)
    Compatibility level is set to 110.
    One of the limitations states: XML columns with a depth of more than 128 nested nodes are not supported.
    How do I verify this? Thanks.
    Rajkumar Yelugu

  • Orcl:query-database gives error when using to_char function in select stmt

    hi
    Use Case: We get a csv file ("bank_import_<MMDDYYYY>.csv") from the bank containing the transactions that occurred for the month. The date in the filename is retrieved into a string, and I need to convert this string to the format "MON-DD-YYYY". This is the required format for a header table which takes this string as its primary key.
    Code:
    statement_name = '11302206'.........
    <copy>
    <from expression="concat("'select to_char(to_date('",bpws:getVariableData('statement_name') ,"','MMDDYYYY'),'MON-DD-YYYY') from dual'")"/>
    <to variable="xpath"/>
    </copy>
    <copy>
    <from expression="orcl:query-database(bpws:getVariableData('xpath'),false(),false(),'jdbc:oracle:thin:apps/apps@croaker:1529:RSICMI')"/>
    <to variable="statement_name"/>
    </copy>
    Error:
    [2006/12/06 19:13:04] Updated variable "xpath" less
    <xpath>'select to_char(to_date('10302006','MMDDYYYY'),'MON-DD-YYYY') from dual'</xpath>
    [2006/12/06 19:13:04] "XPathException" has been thrown. less
    XPath expression failed to execute.
    Error while processing xpath expression, the expression is "orcl:query-database(bpws:getVariableData("xpath"), false(), false(), "jdbc:oracle:thin:apps/apps@croaker:1529:RSICMI")", the reason is .
    Please verify the xpath query.
    Log Message:
    <2006-12-06 19:13:04,595> <DEBUG> <UAT.collaxa.cube.xml> <XPathUtil::evaluate> XPathQuery[concat("'select to_char(to_date('", bpws:getVariableData("statement_name"), "','MMDDYYYY'),'MON-DD-YYYY') from dual'")], XPath Result: class=java.lang.String value='select to_char(to_date('10302006','MMDDYYYY'),'MON-DD-YYYY') from dual'
    <2006-12-06 19:13:04,595> <DEBUG> <UAT.collaxa.cube.xml> <XPathUtil::initXPath> namespaceMapping is: rootMap: {bpws=http://schemas.xmlsoap.org/ws/2003/03/business-process/, xp20=http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.Xpath20, ns4=http://xmlns.oracle.com/pcbpel/adapter/db/top/BAIBankUpload, ldap=http://schemas.oracle.com/xpath/extension/ldap, xsd=http://www.w3.org/2001/XMLSchema, ns5=http://xmlns.oracle.com/pcbpel/adapter/file/, client=http://xmlns.oracle.com/BAI_BankUpload, ora=http://schemas.oracle.com/xpath/extension, ns1=http://xmlns.oracle.com/pcbpel/adapter/file/readBAIBankImportCSV/, ns3=http://TargetNamespace.com/readBAIBankImportCSV, ns2=http://xmlns.oracle.com/pcbpel/adapter/db/Insert_SI_CE_STATEMENT_LINES_INT/, bpelx=http://schemas.oracle.com/bpel/extension, orcl=http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.ExtFunc, =http://schemas.xmlsoap.org/ws/2003/03/business-process/}
    scopedMap: {}
    <2006-12-06 19:13:04,751> <DEBUG> <UAT.collaxa.cube.xml> <XPathUtil::evaluate> XPathQuery :orcl:query-database(bpws:getVariableData("xpath"), false(), false(), "jdbc:oracle:thin:apps/apps@croakercom:1529:RSICMI")
    org.collaxa.thirdparty.jaxen.FunctionCallException
         at org.collaxa.thirdparty.jaxen.FunctionCallException.fillInStackTrace(FunctionCallException.java:124)
         at java.lang.Throwable.<init>(Throwable.java:195)
         at java.lang.Exception.<init>(Exception.java:41)
         at org.collaxa.thirdparty.jaxen.saxpath.SAXPathException.<init>(SAXPathException.java:83)
         at org.collaxa.thirdparty.jaxen.JaxenException.<init>(JaxenException.java:82)
         at org.collaxa.thirdparty.jaxen.FunctionCallException.<init>(FunctionCallException.java:86)
         at oracle.tip.pc.services.functions.ExtFuncFunction$QueryDatabaseFunction.call(ExtFuncFunction.java:190)
         at org.collaxa.thirdparty.jaxen.expr.DefaultFunctionCallExpr.evaluate(DefaultFunctionCallExpr.java:184)
         at org.collaxa.thirdparty.jaxen.expr.DefaultXPathExpr.asList(DefaultXPathExpr.java:107)
         at org.collaxa.thirdparty.jaxen.BaseXPath.selectNodesForContext(BaseXPath.java:724)
         at org.collaxa.thirdparty.jaxen.BaseXPath.selectNodes(BaseXPath.java:253)
         at org.collaxa.thirdparty.jaxen.BaseXPath.evaluate(BaseXPath.java:210)
         at com.collaxa.cube.xml.xpath.XPathUtil.evaluate(XPathUtil.java:93)
         at com.collaxa.cube.engine.ext.wmp.BPELAssignWMP.evalFromValue(BPELAssignWMP.java:501)
         at com.collaxa.cube.engine.ext.wmp.BPELAssignWMP.__executeStatements(BPELAssignWMP.java:122)
         at com.collaxa.cube.engine.ext.wmp.BPELActivityWMP.perform(BPELActivityWMP.java:188)
         at com.collaxa.cube.engine.CubeEngine.performActivity(CubeEngine.java:3408)
         at com.collaxa.cube.engine.CubeEngine.handleWorkItem(CubeEngine.java:1836)
         at com.collaxa.cube.engine.dispatch.message.instance.PerformMessageHandler.handleLocal(PerformMessageHandler.java:75)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleLocalMessage(DispatchHelper.java:166)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.sendMemory(DispatchHelper.java:252)
         at com.collaxa.cube.engine.CubeEngine.endRequest(CubeEngine.java:5438)
         at com.collaxa.cube.engine.CubeEngine.createAndInvoke(CubeEngine.java:1217)
         at com.collaxa.cube.engine.delivery.DeliveryService.handleInvoke(DeliveryService.java:511)
         at com.collaxa.cube.engine.ejb.impl.CubeDeliveryBean.handleInvoke(CubeDeliveryBean.java:335)
         at ICubeDeliveryLocalBean_StatelessSessionBeanWrapper16.handleInvoke(ICubeDeliveryLocalBean_StatelessSessionBeanWrapper16.java:1796)
         at com.collaxa.cube.engine.dispatch.message.invoke.InvokeInstanceMessageHandler.handle(InvokeInstanceMessageHandler.java:37)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    Root cause:
    java.lang.ClassCastException
         at oracle.tip.pc.services.functions.ExtFuncFunction$QueryDatabaseFunction.call(ExtFuncFunction.java:158)
         at org.collaxa.thirdparty.jaxen.expr.DefaultFunctionCallExpr.evaluate(DefaultFunctionCallExpr.java:184)
         at org.collaxa.thirdparty.jaxen.expr.DefaultXPathExpr.asList(DefaultXPathExpr.java:107)
         at org.collaxa.thirdparty.jaxen.BaseXPath.selectNodesForContext(BaseXPath.java:724)
         at org.collaxa.thirdparty.jaxen.BaseXPath.selectNodes(BaseXPath.java:253)
         at org.collaxa.thirdparty.jaxen.BaseXPath.evaluate(BaseXPath.java:210)
         at com.collaxa.cube.xml.xpath.XPathUtil.evaluate(XPathUtil.java:93)
         at com.collaxa.cube.engine.ext.wmp.BPELAssignWMP.evalFromValue(BPELAssignWMP.java:501)
         at com.collaxa.cube.engine.ext.wmp.BPELAssignWMP.__executeStatements(BPELAssignWMP.java:122)
         at com.collaxa.cube.engine.ext.wmp.BPELActivityWMP.perform(BPELActivityWMP.java:188)
         at com.collaxa.cube.engine.CubeEngine.performActivity(CubeEngine.java:3408)
         at com.collaxa.cube.engine.CubeEngine.handleWorkItem(CubeEngine.java:1836)
         at com.collaxa.cube.engine.dispatch.message.instance.PerformMessageHandler.handleLocal(PerformMessageHandler.java:75)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleLocalMessage(DispatchHelper.java:166)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.sendMemory(DispatchHelper.java:252)
         at com.collaxa.cube.engine.CubeEngine.endRequest(CubeEngine.java:5438)
         at com.collaxa.cube.engine.CubeEngine.createAndInvoke(CubeEngine.java:1217)
         at com.collaxa.cube.engine.delivery.DeliveryService.handleInvoke(DeliveryService.java:511)
         at com.collaxa.cube.engine.ejb.impl.CubeDeliveryBean.handleInvoke(CubeDeliveryBean.java:335)
         at ICubeDeliveryLocalBean_StatelessSessionBeanWrapper16.handleInvoke(ICubeDeliveryLocalBean_StatelessSessionBeanWrapper16.java:1796)
         at com.collaxa.cube.engine.dispatch.message.invoke.InvokeInstanceMessageHandler.handle(InvokeInstanceMessageHandler.java:37)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-12-06 19:13:04,751> <ERROR> <UAT.collaxa.cube.xml> ORABPEL-09500
    XPath expression failed to execute.
    Error while processing xpath expression, the expression is "orcl:query-database(bpws:getVariableData("xpath"), false(), false(), "jdbc:oracle:thin:apps/apps@croaker:1529:RSICMI")", the reason is .
    Please verify the xpath query.

    Hi,
    QAbdul wrote:
    "when I tried to execute the following in XMLQuery by calling TO_CHAR() within this query I am getting this error: ORA-19237: XP0017 - unable to resolve call to function - fn:TO_CHAR"
    TO_CHAR is a SQL function; XQuery is unaware of it.
    XPath 2.0 specifications define a fn:format-date function, but Oracle has not yet included it in its XQuery implementation.
    The easiest way to go is A_Non's solution, but if you need to format at multiple places in the query, you can declare a local XQuery function.
    For example, to format to "DD/MM/YYYY" from the canonical xs:date format "YYYY-MM-DD" :
    {code}
    declare function local:format-date($d as xs:date) as xs:string
    {
    let $s := xs:string($d)
    return concat(
    substring($s, 10, 2), "/",
    substring($s, 7, 2), "/",
    substring($s, 2, 4)
    )
    };
    {code}
    and an example of use :
    {code}
    SQL> CREATE TABLE test_xqdate AS SELECT sysdate dt FROM dual;
    Table created
    SQL> SELECT *
    2 FROM XMLTable(
    3 'declare function local:format-date($d as xs:date) as xs:string
    4 {
    5 let $s := xs:string($d)
    6 return concat(
    7 substring($s, 10, 2), "/",
    8 substring($s, 7, 2), "/",
    9 substring($s, 2, 4)
    10 )
    11 }; (: :)
    12 for $i in ora:view("TEST_XQDATE")/ROW/DT
    13 return element e {
    14 attribute xs_date_format { $i/text() },
    15 attribute local_format { local:format-date($i) }
    16 }'
    17 COLUMNS
    18 xs_date_format VARCHAR2(10) PATH '@xs_date_format',
    19 local_format VARCHAR2(10) PATH '@local_format'
    20 )
    21 ;
    XS_DATE_FORMAT LOCAL_FORMAT
    2010-10-28 28/10/2010
    {code}

  • Problem in using query-database() function in Transformation

    Hi All,
    I am using JDev and SOA 10.1.3.4.
    I have an async process.
    In that I am doing a transformation in which source is InputVariable and target is result.
    In Transformation I am using query-database function to fetch a record from DB and am assigning that to result.
    *<xsl:value-of select='orcl:query-database(concat("select ename from emp where empid=",/ns1:DBXSLProcessRequest/ns1:input),false(),false(),"jdbc:oracle:thin:scott/tiger@localhost:1521:abcd")'/>*
    I am not getting any error, but query-database is not returning anything; I mean, it is returning null.
    If I run that query at the SQL prompt, it returns the empname.
    Please help me out.
    Regards
    PavanKumar.M

    Hi Pavan
    I tried the following in a BPEL Transform activity's XSL file, and it works fine and returns the sysdate for me.
    <xsl:value-of select='orcl:query-database("select sysdate from dual",false,false,"jdbc/myDS")'/>
    Can you try the above in a new XSL file of the Transform activity and let me know whether it gives you the result? If it works, you can start making changes according to your requirement.
    Steps to follow:
    1. Create a connection pool in EM, and make sure it is working by using Test on the connection pool.
    2. Create a datasource pointing to the connection pool created above, and restart oc4j_soa.
    3. Rename the XSL file of the Transform activity in BPEL and use the query-database function above. Sometimes XSL files are cached, so changes may not take effect; renaming the file is a good way to make sure your test picks up the new version.
    Thanks
    Seshagiri.Rayala
    http://soabpel.wordpress.com/

  • Use of sql group function in orcl:query-database - urgent

    All,
    Version: 10.1.3.4
    Two requirements for me:
    1. I want to use sum function in orcl:query-database. How to use it?
    For ex: I tried the following
    <xsl:value-of select='orcl:query-database("select sum(salary) from emp",false(),false(),"jdbc/DB1")'/>
    I got the following error
    oracle.xml.sql.OracleXMLSQLException: Character ')' is not allowed in an XML tag name.
    When I tried without the sum function, it worked fine.
    2. I used the same table, but without the sum function as below
    <xsl:value-of select='orcl:query-database("select salary from emp",false(),false(),"jdbc/DB1")'/>
    This time, it returns the first employee's salary!! I don't understand this logic. I expected the query either to return all the rows or to throw an error, but neither happened!
    Can anyone please explain this behaviour? I would like a reply for both queries!
    Currently I'm in a project working on a similar scenario, so please let me know ASAP.
    Thanks,
    Sen

    Hi Sen,
    Create a normal variable in XSLT.
    Then use that variable.
    I mean variable name='Var_1' select='Your Query'
    Now use Var_1/yourcollection/...
    I am giving an example:
    <xsl:variable name="Stopdetails" select="/ns0:MyEBM/ns0:DataArea/ns0:MyEBO/ns0:Stops/ns0:Stop[ns0:StopID=$TempStopId]"/>
                <xsl:variable name="AccStopTypeVar">
                        <xsl:value-of select="$Stopdetails/ns0:StopType"/>
    </xsl:variable>
    If you want, you can loop over the created variable.
    You can do these things in XSL.
    But it would be better if you take the DB calls out of the XSL; there are some debugging problems with XSL.
    Regards
    PavanKumar.M
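    A hedged side note on the first question (an assumption on my part, not from the original thread): orcl:query-database builds the returned XML element names from the column expressions, and SUM(SALARY) contains characters that are illegal in an XML tag name, which matches the "Character ')' is not allowed in an XML tag name" error. Giving the aggregate a column alias is usually enough to avoid it:
    {code}
    select sum(salary) as total_salary from emp
    {code}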

  • Query Database - Write to Database

    I am querying a SQL DB which has 10 results in total with 15 fields each. The run book will filter out 2 eventually based on some rules.
    Once all the activities are complete as per the runbook, I want to write 'Complete' back to the database in a field called Status.
    The write works; however, it creates a completely new item. I want to update the existing request.
    Regards, Vik Singh "If this thread answered your question, please click on "Mark as Answer"

    So if I understand correctly you are querying a database and based on the response you do some activities and then you want to update the records you queried in order to indicate that the activities have been completed?
    If you just want to update the existing records, then I would probably just do an Update query based on the primary key of the record you queried, using the "Query Database" activity instead of "Write to Database" (which always writes a new record).
    Something like: Update [Table Name] set Status = 'Completed' where PrimaryKey = 'xxx'
    You will probably need to cut up the Query Result from your first query first in order to get the Primary Key.
    Usually I do this by reading Published Data "Full Line as String Seperated by ;" with a PowerShell Script (Run .NET Activity).
    For example:
    $Query = "<Full Line as String>"   # Published Data from the Query activity (placeholder)
    $All_Records = $Query.Split(";")   # creates an array where every element is a column from the query result
    $PrimaryKey = $All_Records[0]      # assuming the primary key is the first column of the array
    Then use that $PrimaryKey value in your new query.

  • Problem in using oraext:query-database() command in xslt

    Hi,
    I am querying a function through the query-database() command in a certain xslt within a BPEL Process as below:
    <xsl:variable name="Corporation">
                    <xsl:value-of select="string('CORPORATION')"/>
      </xsl:variable>
    <xsl:variable name="CustAccIdFrmDB">
                    <xsl:value-of select='orext:query-database(concat("select xx_egytrans_integration.get_xid(",$getCustAccID,",",$Corporation,") from dual"),false(),false(),string("jdbc/otmdatasource"))'/>
    </xsl:variable>
    But after deploying the BPEL Process and running it, I am getting an error at run time.
    Below is the error,
    <bpelFault><faultType>0</faultType><subLanguageExecutionFault xmlns="http://schemas.oracle.com/bpel/extension"><part name="summary"><summary>XPath expression failed to execute. An error occurs while processing the XPath expression; the expression is ora:doXSLTransformForDoc('xsl/Transformation__CustomerUpdate.xsl', $Invoke_getProfileOptions_getProfileOptions_OutputVariable.getProfileOptionsOutputCollection, 'Invoke_CustomerUpdateProc_Customer_UpdateProc_OutputVariable.OutputParameters', $Invoke_CustomerUpdateProc_Customer_UpdateProc_OutputVariable.OutputParameters, 'Invoke_CustAccProc_getCustAccID_OutputVariable_1.OutputParameters', $Invoke_CustAccProc_getCustAccID_OutputVariable_1.OutputParameters, 'Invoke_GetCustTaxExempt_getCustTaxExempStatus__Update_OutputVariable.OutputParameters', $Invoke_GetCustTaxExempt_getCustTaxExempStatus__Update_OutputVariable.OutputParameters, 'Invoke_getCustContactUpd_get_cust_acc_site_contactUpd_OutputVariable.OutputParameters', $Invoke_getCustContactUpd_get_cust_acc_site_contactUpd_OutputVariable.OutputParameters). The XPath expression failed to execute; the reason was: javax.xml.transform.TransformerException: XML-22900: (Fatal Error) An internal error condition occurred.. Check the detailed root cause described in the exception message text and verify that the XPath query is correct. </summary></part><part name="code"><code>XPathExecutionError</code></part></subLanguageExecutionFault></bpelFault>
    Could you please let me know the probable reason for the error? Please help at the earliest.
    Thanks,
    Promit

    Hi Anshul,
    Thanks for replying.
    The query, when executed in SQL*Plus, returns a proper value. The result is either a concatenated string or 'NO_DATA'.
    Please note that: $getCustAccID --> x-path expression and $Corporation --> hardcoded value - 'CORPORATION'.
    I have tried formatting the query string in various ways, but no luck; in all cases the error is the same.
    Waiting for your reply.
    Thanks in advance..
    Regards,
    Promit

  • Appropriate index for a query with 3 ands

    Hi,
    I have the following table:
    PAGES
    BOOK_ID DIMENSION BEGIN_POS END_POS PAGE_TEXT
    where book_id is 64-bits, dimension is 32-bits, begin_pos is 32-bits, end_pos is 32-bits, and page_text is a blob (UTF-8)
    My query is:
    select * from PAGES
    where BOOK_ID = 1234 and DIMENSION = -2 and BEGIN_POS <= 10000 and 10000 < END_POS
    Which is sure to return 1 result.
    My question is what kind of (multi-column?) index or indices do I need for this query whose WHERE clause uses 4 columns?
    Thanks,
    Andy

    "book_id is 64-bits, dimension is 32-bits, begin_pos is 32-bits, end_pos is 32-bits" - I'm sorry, I don't understand how one column could be 64 bits and another 32 bits; the word size applies to the whole database. Well, maybe I am missing the point.
    "My question is what kind of (multi-column?) index or indices do I need for this query whose WHERE clause uses 4 columns?" - That really depends, but you should not base your index creation on a single query; consider more than that. Maybe book_id is a unique key, in which case having a PK on it could be nice. For the other columns it is difficult; they don't mean anything to me.
    Nicolas.
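    For illustration only (a hedged sketch, not part of the original reply): if this query pattern is representative, a composite index with the equality columns first and one of the range columns last is the usual starting point, so the optimizer can range-scan within the matching (BOOK_ID, DIMENSION) entries:
    {code}
    CREATE INDEX pages_lookup_idx ON pages (book_id, dimension, begin_pos);
    {code}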

  • Orcl:query-database ERROR

    I am getting the following error when I use the orcl:query-database() function. The JNDI name is correct and works for DB adapters. I don't know what's wrong with this.
    <subLanguageExecutionFault xmlns="http://schemas.oracle.com/bpel/extension"><part name="code"><code>XPathExecutionError</code>
    </part><part name="summary"><summary>XPath expression failed to execute.
    Error while processing xpath expression, the expression is "orcl:query-database('select testan_s.nextval from dual',false(),false(),'eis/DB/TEST)", the reason is message can't be null.
    Please verify the xpath query.
    </summary>
    </part></subLanguageExecutionFault>

    Using a JNDI name gives this freak error. If you use a JDBC URL string in the last parameter, then it works.
    It is really sad that these kinds of simple functions are not working even in 10.1.3.3, which makes it very difficult to design the process in a simple way.
    When will these be fixed? Is there any workaround? I need to use this in XSL transformations. Does anyone have a clue?

  • Transformation not working on result  from oraext:query-database() functio

    Step 1: Create a project in JDeveloper (File->New->Project->SOA Project)
    Step 2: Select synchronous process
    Step 3: Take the default schema for input and output
    Step 4: Create a schema to hold multiple simple-type string values.
    Step 5: Import that xsd file in the project wsdl file.
    Step 6: Create another variable (var1) referring to the above element type (result)
    Step 7: Drop an Assign activity from the component palette onto the process between the Receive and Reply activities
    Step 8: In the From section use the function oraext:query-database("select ename from emp",true(),true(),'jdbc/EBS_database'). In the To section select the var1 variable.
    Step 9: Drop another Assign activity onto the process before the Reply activity and after the first Assign activity
    Step 10: In the From section select the first ename from the var1 variable using XPath and assign it to the output variable.
    Error: The value is not coming through to the output from the source (var1) variable, using either XPath or a transformation.

    See the audit trail on EM and verify why it is not working.
    Regards,
    Anuj
