Alternate for Oracle statement in Sybase

What is the alternate for the following Oracle statement in Sybase:
insert /*+ APPEND */ into MYTABLE nologging
I need to bulk insert into MYTABLE (in Sybase), reading from a CSV file or another database. More than 0.4 million records.
I am using Java as the programming language.

let me get this straight:
1) you're asking a SYBASE question
2) and using Java as your language
And this is:
1) an Oracle forum
2) where SQL and PL/SQL are the languages
I think you're asking your question in the wrong place.
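
For what it's worth, Sybase ASE has no direct counterpart to the APPEND hint or NOLOGGING; below is a hedged T-SQL sketch of the usual minimally logged options (names such as mydb, MYTABLE_STAGE and OTHERDB..SOURCE_TABLE are placeholders). From Java, the common route for a few hundred thousand rows is simply a PreparedStatement with addBatch()/executeBatch() and autocommit off, or the external bcp utility.

-- Hedged sketch, not a drop-in equivalent. In Sybase ASE, minimally logged loads
-- generally mean SELECT INTO (creates a new table) or "fast" bcp into a table
-- with no indexes or triggers. Both depend on this database option:
use master
go
sp_dboption 'mydb', 'select into/bulkcopy/pllsort', true
go
use mydb
go
checkpoint
go
-- Minimally logged copy into a brand-new table (placeholder names):
select *
into   MYTABLE_STAGE
from   OTHERDB..SOURCE_TABLE
go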

Similar Messages

  • Alternate for tables statement

    Hi friends,
    If I use dictionary fields on a screen, I use the TABLES statement for the corresponding structure. But I think the TABLES statement is obsolete; is that right? If so, what is the alternative to the TABLES statement, and how do I move field values between the screen and the ABAP program? Please help.
    Thanks in advance

    Fields are moved from your program to the screen when the field names match.  There is no explicit transport command needed.
    I do not think that the TABLES command is considered obsolete when used in screen programming.  That is the only time that I use it.  There was some training material in regards to the TABLES command being required for proper transport between the program and the screen.  I do not know if this has changed with 4.7.

  • What is the Alternate for AMPERSAND (&) in oracle???????????

    Hi All,
    This is my query......
    select * from
    (select 'abc' a from dual union
    select 'def' a from dual union
    select 'abc & xyz' a from dual
    ) where a='abc & xyz';
    Output should be 'abc & xyz'.
    Since I am using '&', it prompts me to enter a value from the keyboard.
    Yes, I know, I can get the output by using SET SCAN OFF, SET DEFINE OFF, or SET ESCAPE ON.
    What is the alternate for AMPERSAND (&) in Oracle?
    The value 'abc & xyz' will be passed from the front end to the select statement.
    Regards,
    Kashyap Varma N

    nkvkashyap wrote: (the question above, quoted in full)
    I think you are confused. Oracle (i.e. the DB) doesn't care about &. However, front ends like SQL*Plus do, and they prompt you for a substitution value (unless SET DEFINE OFF etc.).
    If you are using Java, then pass strings with & with no problem.
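    If the goal is literally an alternative to typing & in a SQL*Plus script, one common workaround (a hedged re-write of the same query) is to build the literal with CHR(38), the ASCII code for the ampersand, so no & appears in the script text at all:
    select a
    from  (select 'abc' a from dual union
           select 'def' a from dual union
           select 'abc ' || chr(38) || ' xyz' a from dual)
    where  a = 'abc ' || chr(38) || ' xyz';
    From JDBC, the simpler route is a bind variable (PreparedStatement.setString), where the ampersand is just data.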

  • KM for Bulk loading from Sybase to Oracle

    Is there a KM available for bulk loading from Sybase to Oracle?
    Maybe using a Unix pipe, a Sybase fetch, and direct-path SQL*Loader.
    Anyone have some thoughts on this? I appreciate your responses.

    Sample CTL generated by ODI.
    OPTIONS (
    ERRORS=0,
    DIRECT=TRUE
    )
    LOAD DATA
    INFILE "/exp_imp/ODI_C_0ODI_TEST.bcp"
    BADFILE "/exp_imp/ODI_C_0ODI_TEST.bad"
    DISCARDFILE "/exp_imp/ODI_C_0ODI_TEST.dsc"
    DISCARDMAX 1
    INTO TABLE ODISYB_TEST.ODI_C_0ODI_TEST
    FIELDS TERMINATED BY 'M-,'
    (
    C1_TEST_NO,
    C2_TEST_DESC,
    C3_TEST_TOKEN,
    C4_TEST_DATE
    )
    Error on SQLLoader log file.
    Record 1: Rejected - Error on table ODISYB_TEST.ODI_C_0ODI_TEST, column C4_TEST_DATE.
    ORA-01858: a non-numeric character was found where a numeric was expected
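    A hedged note on the ORA-01858: the date text in the .bcp file evidently does not match the default date format SQL*Loader expects, so the usual fix is an explicit mask on the DATE column in the control file. The mask below is only an assumption; it must match however the Sybase datetime was actually unloaded.
    ...
    FIELDS TERMINATED BY 'M-,'
    (
    C1_TEST_NO,
    C2_TEST_DESC,
    C3_TEST_TOKEN,
    C4_TEST_DATE DATE "YYYY-MM-DD HH24:MI:SS"  -- assumed mask; adjust to the unload format
    )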

  • Alternate for union all oracle sql

    Hi,
    is there any alternate for UNION ALL? Because it's very slow...
    Regards
    raj

    CharlesRoos wrote:
    I think you can create a materialized view over the union query, then index that view and put the view refresh rate to 1 second or something. Theoretically it sounds doable.
    My car engine is misfiring and making my car slow; shall I attach a propeller to the boot of the car to push it along a bit?
    Better to fix the poorly performing engine.

  • Statement of Directions for Oracle Forms

    There are two SoDs for Forms available at http://www.oracle.com/technology/products/forms/index.html:
    2005: Strategy for Oracle Forms:
    http://www.oracle.com/technology/products/forms/pdf/10g/ToolsSOD.pdf
    2007: Forms 10g Client Platform Support:
    http://www.oracle.com/technology/products/forms/htdocs/10gr2/clientsod_forms10gr2.html
    do we get a new SoD-Strategy when Forms 11g releases?

    There are none planned but that might change.
    I am planning to make the client SOD version independent. That is, drop the 10gr2 in the name.

  • Error BACKINT for Oracle Connection

    hi @ maxdb gurus...
    i have a problem backing up my maxdb 7.6.0.033 (serving for a mysap.erp2005 on aix 5.3) using the backint mechanism.
    -> backup media created: two pipes and one parallel medium containing these pipes
    -> bsi.env has been created
    -> backint4sapdb.sar contains
       4 staging areas each with 4096 MB
       files per backint call 2
    -> maxdb is about 57 gb perm. data area (and 6 gb temp.)
    i use the backup wizard (dbmgui) to invoke a complete data backup, which starts (pipes are created in the file system) and runs until approx. 4 GB... then the backup terminates with the error "-24920 backup operation was unsuccessful. The database request failed with error -8020"
    what might be the problem? are the staging areas too small? it says that when using only one staging area it must hold the complete database... can i then conclude that the 4 staging areas together must hold the database as well, meaning they must be around 15 GB each?
    GreetZ, AH

    yup...but that also means that the sum of the staging areas needs as much space as the database, i.e. around 70 GB net! the staging files will be filled and thus grow to the defined size, meaning that the file system(s) need to be big enough!
    everything works fine until the stage files are filled (up to the defined size) and then stops...when i change the sizes of the staging areas i can reproduce the error!
    i compared the tsm implementation of that system with another system (live cache...kind of small regarding db size...) and found nothing serious...
    -> dbm.ebp (AIX 5.3, MaxDB 7.6.0.033, ERP2005, TSM 5.3.3.0)
    2006-11-09 13:56:50
    Using environment variable 'TEMP' with value '' as directory for temporary files and pipes.
    Using environment variable 'TMP' with value '' as directory for temporary files and pipes.
    Using connection to Backint for MaxDB Interface.
    2006-11-09 13:56:50
    Checking existence and configuration of Backint for MaxDB.
        Using configuration variable 'BSI_ENV' = '/sapdb/data/wrk/TDV/bsi.env' as path of the configuration file of Backint for MaxDB.
        Setting environment variable 'BSI_ENV' for the path of the configuration file of Backint for MaxDB from '/sapdba/data/wrk/TDV/bsi.env' to configuration value '/sapdb/data/wrk/TDV/bsi.env'.
        Reading the Backint for MaxDB configuration file '/sapdb/data/wrk/TDV/bsi.env'.
            The following line of the Backint for MaxDB configuration file does not start with a proper keyword and is ignored:
            The following line of the Backint for MaxDB configuration file does not start with a proper keyword and is ignored:
            The following line of the Backint for MaxDB configuration file does not start with a proper keyword and is ignored:
            The following line of the Backint for MaxDB configuration file does not start with a proper keyword and is ignored:
            Found keyword 'BACKINT' with value '/sapdb/TDV/db/bin/backint'.
            The following line of the Backint for MaxDB configuration file does not start with a proper keyword and is ignored:
            Found keyword 'INPUT' with value '/sapdb/TDV/backup/backint4sapdb.in'.
            Found keyword 'OUTPUT' with value '/sapdb/TDV/backup/backint4sapdb.out'.
            Found keyword 'ERROROUTPUT' with value '/sapdb/TDV/backup/backint4sapdb.err'.
            The following line of the Backint for MaxDB configuration file does not start with a proper keyword and is ignored:
            Found keyword 'PARAMETERFILE' with value '/sapdb/data/wrk/TDV/backint4sapdb.par'.
            Found keyword 'TIMEOUT_SUCCESS' with value '6000'.
            Found keyword 'TIMEOUT_FAILURE' with value '300'.
        Finished reading of the Backint for MaxDB configuration file.
        Using '/sapdb/TDV/db/bin/backint' as Backint for MaxDB program.
        Using '/sapdb/TDV/backup/backint4sapdb.in' as input file for Backint for MaxDB.
        Using '/sapdb/TDV/backup/backint4sapdb.out' as output file for Backint for MaxDB.
        Using '/sapdb/TDV/backup/backint4sapdb.err' as error output file for Backint for MaxDB.
        Using '/sapdb/data/wrk/TDV/backint4sapdb.par' as parameter file for Backint for MaxDB.
        Using '6000' seconds as timeout for Backint for MaxDB in the case of success.
        Using '300' seconds as timeout for Backint for MaxDB in the case of failure.
        Using '/sapdb/data/wrk/TDV/dbm.knl' as backup history of a database to migrate.
        Using '/sapdb/data/wrk/TDV/dbm.ebf' as external backup history of a database to migrate.
        Checking availability of backups using backint's inquire function.
    Check passed successful.
    2006-11-09 13:56:50
    Checking medium.
    Check passed successfully.
    2006-11-09 13:56:50
    Preparing backup.
        Setting environment variable 'BI_CALLER' to value 'DBMSRV'.
        Setting environment variable 'BI_REQUEST' to value 'NEW'.
        Setting environment variable 'BI_BACKUP' to value 'FULL'.
        Constructed Backint for MaxDB call '/sapdb/TDV/db/bin/backint -u TDV -f backup -t file -p /sapdb/data/wrk/TDV/backint4sapdb.par -i /sapdb/TDV/backup/backint4sapdb.in -c'.
        Created temporary file '/sapdb/TDV/backup/backint4sapdb.out' as output for Backint for MaxDB.
        Created temporary file '/sapdb/TDV/backup/backint4sapdb.err' as error output for Backint for MaxDB.
        Writing '/sapdb/TDV/backup/back-tdv-data-m10.pipe #PIPE' to the input file.
        Writing '/sapdb/TDV/backup/back-tdv-data-m11.pipe #PIPE' to the input file.
    Prepare passed successfully.
    2006-11-09 13:56:51
    Creating pipes for data transfer.
        Creating pipe '/sapdb/TDV/backup/back-tdv-data-m10.pipe' ... Done.
        Creating pipe '/sapdb/TDV/backup/back-tdv-data-m11.pipe' ... Done.
    All data transfer pipes have been created.
    2006-11-09 13:56:51
    Starting database action for the backup.
        Requesting 'SAVE DATA QUICK TO '/sapdb/TDV/backup/back-tdv-data-m10.pipe' PIPE,'/sapdb/TDV/backup/back-tdv-data-m11.pipe' PIPE BLOCKSIZE 8 NO CHECKPOINT MEDIANAME 'BACK-TDV-DATA-G1'' from db-kernel.
    The database is working on the request.
    2006-11-09 13:56:51
    Waiting until database has prepared the backup.
        Asking for state of database.
        2006-11-09 13:56:51 Database is still preparing the backup.
        Waiting 1 second ... Done.
        Asking for state of database.
        2006-11-09 13:56:52 Database is still preparing the backup.
        Waiting 2 seconds ... Done.
        Asking for state of database.
        2006-11-09 13:56:54 Database is still preparing the backup.
        Waiting 3 seconds ... Done.
        Asking for state of database.
        2006-11-09 13:56:57 Database is still preparing the backup.
        Waiting 4 seconds ... Done.
        Asking for state of database.
        2006-11-09 13:57:01 Database is still preparing the backup.
        Waiting 5 seconds ... Done.
        Asking for state of database.
        2006-11-09 13:57:06 Database has finished preparation of the backup.
    The database has prepared the backup successfully.
    2006-11-09 13:57:06
    Starting Backint for MaxDB.
        Starting Backint for MaxDB process '/sapdb/TDV/db/bin/backint -u TDV -f backup -t file -p /sapdb/data/wrk/TDV/backint4sapdb.par -i /sapdb/TDV/backup/backint4sapdb.in -c >>/sapdb/TDV/backup/backint4sapdb.out 2>>/sapdb/TDV/backup/backint4sapdb.err'.
        Process was started successfully.
    Backint for MaxDB has been started successfully.
    2006-11-09 13:57:06
    Waiting for end of the backup operation.
        2006-11-09 13:57:06 The backup tool is running.
        2006-11-09 13:57:06 The database is working on the request.
        2006-11-09 13:57:11 The backup tool is running.
        2006-11-09 13:57:11 The database is working on the request.
        2006-11-09 13:57:21 The backup tool is running.
        2006-11-09 13:57:21 The database is working on the request.
        2006-11-09 13:57:36 The backup tool is running.
        2006-11-09 13:57:36 The database is working on the request.
        2006-11-09 13:57:56 The backup tool is running.
        2006-11-09 13:57:56 The database is working on the request.
        2006-11-09 13:58:21 The backup tool is running.
        2006-11-09 13:58:21 The database is working on the request.
        2006-11-09 13:58:51 The backup tool is running.
        2006-11-09 13:58:51 The database is working on the request.
        2006-11-09 13:59:21 The database has finished work on the request.
        Receiving a reply from the database kernel.
        Got the following reply from db-kernel:
            SQL-Code              :-8020
            Date                  :20061109
            Time                  :00135703
            Database              :TDV
            Server                :r4335
            KernelVersion         :Kernel    7.6.00   Build 033-123-130-873
            PagesTransfered       :377688
            PagesLeft             :6903331
            MediaName             :BACK-TDV-DATA-G1
            Location              :/sapdb/TDV/backup/back-tdv-data-m10.pipe
            Errortext             :end of file
            Label                 :DAT_000000017
            IsConsistent          :true
            FirstLogPageNo        :247812
            DBStamp1Date          :20061109
            DBStamp1Time          :00135651
            BDPageCount           :7280971
            DevicesUsed           :2
            DatabaseID            :r4335:TDV_20061109_135703
            Max Used Data Page    :0
            Converter Page Count  :5201
        2006-11-09 13:59:21 The backup tool is running.
        2006-11-09 13:59:22 The backup tool process has finished work with return code 2.
        2006-11-09 13:59:22 The backup tool is not running.
    The backup operation has ended.
    2006-11-09 13:59:22
    Filling reply buffer.
        Have encountered error -24920:
            The backup tool failed with 2 as sum of exit codes. The database request failed with error -8020.
        Constructed the following reply:
            ERR
            -24920,ERR_BACKUPOP: backup operation was unsuccessful
            The backup tool failed with 2 as sum of exit codes. The database request failed with error -8020.
    Reply buffer filled.
    2006-11-09 13:59:22
    Cleaning up.
        Removing data transfer pipes.
            Removing data transfer pipe /sapdb/TDV/backup/back-tdv-data-m11.pipe ... Done.
            Removing data transfer pipe /sapdb/TDV/backup/back-tdv-data-m10.pipe ... Done.
        Removed data transfer pipes successfully.
        Copying output of Backint for MaxDB to this file.
        ---------- Begin of output of Backint for MaxDB (/sapdb/TDV/backup/backint4sapdb.out)----------
            Reading parameter file /sapdb/data/wrk/TDV/backint4sapdb.par.
            Using staging area /sapdb/TDV/backup/stage1 with a size of 1585446912 bytes.
            Using staging area /sapdb/TDV/backup/stage2 with a size of 1585446912 bytes.
            Using 1 file per Backint for Oracle call.
            Using /sapdb/TDV/dbs/backint as Backint for Oracle.
            Using /sapdb/TDV/dbs/initTDV.utl as parameterfile of Backint for Oracle.
            Using /sapdb/data/wrk/TDV/backint4oracle.his as history file.
            Using /sapdb/TDV/backup/backint4oracle.in as input of Backint for Oracle.
            Using /sapdb/TDV/backup/backint4oracle.out as output of Backint for Oracle.
            Using /sapdb/TDV/backup/backint4oracle.err as error output of Backint for Oracle.
            Using a maximal delay for a Backint for Oracle call of 60 seconds.
            Reading input file /sapdb/TDV/backup/backint4sapdb.in.
            Backing up pipe /sapdb/TDV/backup/back-tdv-data-m10.pipe.
            Backing up pipe /sapdb/TDV/backup/back-tdv-data-m11.pipe.
            Found 2 entries in the input file.
            Starting the backup.
            Starting pipe2file program(s).
            Waiting for creation of temporary files.
            1 temporary file is available for backup.
            Calling Backint for Oracle at 2006-11-09 13:59:20.
            Calling '/sapdb/TDV/dbs/backint -u TDV -f backup -t file -p /sapdb/TDV/dbs/initTDV.utl -i /sapdb/TDV/backup/backint4oracle.in -c' .
            Backint for Oracle ended at 2006-11-09 13:59:20 with return code 2.
            Backint for Oracle output:
            Backint for Oracle output:                          Data Protection for mySAP(R)
            Backint for Oracle output:
            Backint for Oracle output:              Interface between BR*Tools and Tivoli Storage Manager
            Backint for Oracle output:              - Version 5, Release 3, Modification 2.0  for AIX LF 64-bit -
            Backint for Oracle output:                    Build: 275  compiled on Nov 20 2005
            Backint for Oracle output:         (c) Copyright IBM Corporation, 1996, 2005, All Rights Reserved.
            Backint for Oracle output:
            Backint for Oracle output: BKI2027I: Using TSM-API version 5.3.3.0 (compiled with 5.3.0.0).
            Backint for Oracle output: BKI2000I: Successfully connected to ProLE on port tdpr3ora64.
            Backint for Oracle output: BKI0005I: Start of program at: Thu Nov  9 13:59:20 MEZ 2006 .
            Backint for Oracle output: BKI5014E: Tivoli Storage Manager Error:
            Backint for Oracle output: ANS1035S (RC406)  Options file '*' could not be found.
            Backint for Oracle output:
            Backint for Oracle output: BKI0020I: End of program at: Thu Nov  9 13:59:20 MEZ 2006 .
            Backint for Oracle output: BKI0021I: Elapsed time: 00 sec .
            Backint for Oracle output: BKI0024I: Return code is: 2.
            Backint for Oracle output:
            Backint for Oracle error output:
            Finished the backup unsuccessfully.
            #ERROR /sapdb/TDV/backup/back-tdv-data-m10.pipe
            #ERROR /sapdb/TDV/backup/back-tdv-data-m11.pipe
        ---------- End of output of Backint for MaxDB (/sapdb/TDV/backup/backint4sapdb.out)----------
        Removed Backint for MaxDB's temporary output file '/sapdb/TDV/backup/backint4sapdb.out'.
        Copying error output of Backint for MaxDB to this file.
        ---------- Begin of error output of Backint for MaxDB (/sapdb/TDV/backup/backint4sapdb.err)----------
            Backint for Oracle was unsuccessful.
        ---------- End of error output of Backint for MaxDB (/sapdb/TDV/backup/backint4sapdb.err)----------
        Removed Backint for MaxDB's temporary error output file '/sapdb/TDV/backup/backint4sapdb.err'.
        Removed the Backint for MaxDB input file '/sapdb/TDV/backup/backint4sapdb.in'.
    Have finished clean up successfully.
    i invoke the backup through dbmgui, not dbmcli!
    any clues? thx in advance!
    GreetZ, AH

  • GROUP BY - Is there a way to have some sort of for-each statement?

    Hi there,
    This discussion is a branch from https://forums.oracle.com/thread/2614679
    This is a data mart I created for a chain of theatres. The fact table contains information about ticket sales, and I have some dimensions, including DimClient and DimTime.
    Here is an example of each table:
    FactTicketPurchase
    TICKETPURCHASEID  CLIENTID  PRODUCTIONID  THEATREID  TIMEID  TROWID  SUMTOTALAMOUNT
    60006             2527      66            21         942     40      7
    60007             2527      72            21         988     36      6
    60008             2527      74            21         1001    40      6
    60009             2527      76            21         1015    37      6
    60010             2527      79            21         1037    39      6
    DDL for FactTicketPurchase
    CREATE TABLE FactTicketPurchase(
    TicketPurchaseID NUMBER(10) PRIMARY KEY,
    ClientID NUMBER(5) CONSTRAINT fk_client REFERENCES DimClient,
    -- ProductionID NUMBER(5) CONSTRAINT fk_prod REFERENCES DimProduction,
    -- TheatreID NUMBER(5) CONSTRAINT fk_theatre REFERENCES DimTheatre,
    TimeID NUMBER(6) CONSTRAINT fk_time REFERENCES DimTime,
    -- TRowID NUMBER(5) CONSTRAINT fk_trow REFERENCES DimTRow,
    SumTotalAmount NUMBER(22) NOT NULL);
    DimClient
    CLIENTID  CLIENT#  NAME      TOWN            COUNTY
    2503      1        LEE M1    West Bridgford  Nottingham
    2504      2        HELEN W2  Hyson Green     Nottingham
    2505      3        LEE M3    Lenton Abbey    Nottingham
    2506      4        LORA W4   Beeston         Nottingham
    2507      5        SCOTT M5  Radford         Nottingham
    2508      6        MINA W6   Hyson Green     Nottingham
    ...
    DDL for DimClient
    CREATE TABLE DimClient(
    ClientID NUMBER(5) PRIMARY KEY,
    Name VARCHAR2(30) NOT NULL);
    DimTime
    TIMEID  FULLDATE   YEAR  SEASON  MONTH  MONTHDAY  WEEK  WEEKDAY
    817     02-MAR-10  2010  Spring  3      2         9     3
    818     03-MAR-10  2010  Spring  3      3         9     4
    819     04-MAR-10  2010  Spring  3      4         9     5
    820     05-MAR-10  2010  Spring  3      5         9     6
    821     06-MAR-10  2010  Spring  3      6         9     7
    822     07-MAR-10  2010  Spring  3      7         9     1
    DDL for DimTime
    CREATE TABLE DimTime(
    TimeID NUMBER(6) PRIMARY KEY,
    Year NUMBER(4) NOT NULL,
    Season VARCHAR2(20));
    I have the following analysis request to perform on this data mart:
    Top 5 clients by value of ticket sale for each season
    For this requirement I came up with the following query:
    SELECT * FROM
    (SELECT FacTIC.ClientID, DimCLI.Name, SUM(SumtotalAmount) SumTotalAmount, DimTIM.Season
    FROM FactTicketPurchase FacTIC, DimClient DimCLI, DimTime DimTIM
    WHERE FacTIC.ClientID = DimCLI.ClientID
    AND FacTIC.TimeID = DimTIM.TimeID
    AND Season = 'Spring'  AND Year = 2010
    GROUP BY Season, FacTIC.ClientID, DimCLI.Name
    ORDER BY Season ASC, SumTotalAmount DESC)
    WHERE rownum <=5;
    As you can see, in line 06 of the above query, I am explicitly specifying the season for the query to return.
    However, what I would like is a single query that could automatically go through the seasons and years available in the time dimension, in a fashion similar to a FOR-EACH statement. This way, if more years get added to the time dimension, we wouldn't have to amend the query.
    Is this possible?
    Regards,
    P.

    I think I fixed it!
    The trick was to look into the r_num value. As soon as I added it to my query I started to see how r_num was being calculated and I realised that I had to add Season to my partition, right after Year.
    SELECT Year, Season, TotalAmount, Name
    FROM (
       SELECT   DimCLI.Name
       ,        DimTIM.Year
       ,        DIMTIM.Season
       ,        SUM(FacTIC.SumTotalAmount) TotalAmount
       ,        RANK() OVER (PARTITION BY Year, Season
                             ORDER BY SUM(FacTIC.SumTotalAmount) DESC
                            ) AS r_num
       FROM     FactTicketPurchase FacTIC
       ,        DimClient DimCLI
      ,         DimTime DimTIM
       WHERE    FacTIC.ClientID = DimCLI.ClientID
       AND      FacTIC.TimeID = DimTIM.TimeID
       GROUP BY DimTIM.Year
       ,        DimTIM.Season
       ,        DimCLI.Name
    )
    WHERE r_num <= 5 -- Need to amend this line on my data sample to show 2 rows.
    ORDER BY Year, Season, TotalAmount DESC;
    Looking at my data sample, I got the following:
    YEAR  SEASON  TOTALAMOUNT  CLIENTID
    2010  Autumn  29           2504
    2010  Autumn  26           2503
    2010  Spring  25           2503
    2010  Spring  14           2506
    2010  Summer  26           2506
    2010  Summer  26           2504
    2010  Winter  28           2503
    2010  Winter  26           2506
    2011  Autumn  23           2506
    2011  Autumn  14           2503
    2011  Spring  25           2505
    2011  Spring  13           2503
    2011  Summer  21           2505
    2011  Summer  14           2503
    2011  Winter  19           2505
    Now, looking at my real data (considering the top 5 rows, not the top 2), I got:
    YEAR  SEASON  TOTALAMOUNT  NAME
    2010  Autumn  141          BUSH M225
    2010  Autumn  140          DIANA W66
    2010  Autumn  136          HANA W232
    2010  Autumn  120          DIANA W220
    2010  Autumn  120          WILSON M459
    2010  Spring  137          DAVID M469
    2010  Spring  125          ALEX M125
    2010  Spring  124          PETER M269
    2010  Spring  115          ZHOU M463
    2010  Spring  114          TANIA W304
    2010  Summer  138          JANE W404
    2010  Summer  105          MINA W8
    2010  Summer  97           DAVID M275
    2010  Summer  96           CLINTON M483
    2010  Summer  93           ANNA W288
    2011  Spring  12           LUISE W20
    2011  Spring  7            ANNA W432
    2011  Spring  7            LEE M409
    2011  Spring  7            CHRIS W274
    2011  Spring  7            HELEN W136
    2011  Spring  7            LILY W114
    2011  Spring  7            LUISE W348
    2011  Spring  7            LIU M107
    2011  Spring  7            VICTORY W194
    2011  Spring  7            DIANA W240
    2011  Spring  7            HELEN W120
    2011  Spring  7            LILY W296
    2011  Spring  7            MATTHEW M389
    2011  Spring  7            PACO M343
    2011  Spring  7            YANG M411
    2011  Spring  7            ERIC M101
    2011  Spring  7            ALEX M181
    2011  Spring  7            SMITH M289
    2011  Spring  7            DIANA W360
    2011  Spring  7            MATTHEW M63
    2011  Spring  7            SALLY W170
    2011  Spring  7            JENNY W258
    2011  Spring  7

  • How to create a DSN for Oracle Provider for OLE DB in a web server

    Dear Guys,
    I am an Excel VBA developer.
    My requirement is to call a stored procedure with a REF CURSOR from Excel.
    Normally I use the Microsoft ODBC for Oracle driver to connect to the Oracle DB, which is on the server.
    We have users using the Excel reports across the globe.
    Sending the Excel report is enough; the clients can connect to the DB from Excel via the DSN created on a web server.
    But I came to know that we can't access the REF CURSOR using the Microsoft ODBC for Oracle driver, and that it is possible using the Oracle OLE DB provider.
    I have installed the Oracle Client on my machine and tried using the Oracle OLE DB provider like below:
    con.ConnectionString = "Provider=OraOLEDB.Oracle.1;User ID=user_name;" & _
                           "Password=pwd;Data Source=Oracle;"
    The Excel worked fine on my machine, but when I ran the same Excel on a user's machine in a different country I couldn't connect to the DB,
    because the user's machine doesn't have the Oracle Client installed. We have n number of users across the world and we can't install the Oracle Client individually.
    So I plan to create a DSN on a web server, as I did for the Microsoft ODBC for Oracle driver.
    But my doubt is: how can I create a DSN that uses the Oracle OLE DB provider? Is there any driver for the Oracle OLE DB provider, or is there any alternate solution for my issue?
    Can anybody help me on this ASAP?
    Thanks & Regards,
    Satz

    I have created a DSN on a web server (a public-IP machine) that is mapped to an Oracle DB.
    In my Excel VBA code, using an RDO object, I call the DSN on the web server with a connection string like "DSN=ORS;UID=SDATA;PWD=SDATA;".
    This works fine, and in this case the client machine doesn't need the Oracle Client to be installed or any TNS entry.
    The user can run the Excel report by clicking a button; the click event connects to the DSN on the web server (through its URL), routes to the mapped DB, and fetches the queried data.
    Please note that the above DSN is created based on the Microsoft ODBC for Oracle driver.
    But the issue is that using the Microsoft ODBC for Oracle driver I couldn't call the SP with a REF CURSOR.
    When I searched the Internet I came to know that using the provider oraoledb.oracle we can call an SP that uses a REF CURSOR.
    Now my question is what is the driver name that I can use to create a DSN to make use of the provider oraoledb.oracle for calling the SP with REF CURSOR from Excel VBA coding ?
    Appreciate your prompt reply.
    Thanks & Regards,
    Sathish

  • Encountered ora-29701 during Sun Cluster for Oracle RAC 9.2.0.7 startup (UR

    Hi all,
    Need some help from all out there
    In our Sun Cluster 3.1 Data Service for Oracle RAC 9.2.0.7 (Solaris 9) configuration, my team had encountered
    ora-29701 *Unable to connect to Cluster Manager*
    during the startup of the Oracle RAC database instances on the Oracle RAC Server resources.
    We tried the attached workaround by Oracle. This workaround works well the first time, but it no longer works once the server is rebooted.
    Kindly help me to check whether anyone encounter the same problem as the above and able to resolve. Thanks.
    Bug No. 4262155
    Filed 25-MAR-2005 Updated 11-APR-2005
    Product Oracle Server - Enterprise Edition Product Version 9.2.0.6.0
    Platform Linux x86
    Platform Version 2.4.21-9.0.1
    Database Version 9.2.0.6.0
    Affects Platforms Port-Specific
    Severity Severe Loss of Service
    Status Not a Bug. To Filer
    Base Bug N/A
    Fixed in Product Version No Data
    Problem statement:
    ORA-29701 DURING DATABASE CREATION AFTER APPLYING 9.2.0.6 PATCHSET
    *** 03/25/05 07:32 am ***
    TAR:
    PROBLEM:
    Customer applied 9.2.0.6 patchset over 9.2.0.4 patchset.
    While creating the database, customer receives following error:
         ORA-29701: unable to connect to Cluster Manager
    However, if customer goes from 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the problem does not occur.
    DIAGNOSTIC ANALYSIS:
    It seems that the problem is with libskgxn9.so shared library.
    For 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the install log shows the following:
    installActions2005-03-22_03-44-42PM.log:,
    [libskgxn9.so->%ORACLE_HOME%/lib/libskgxn9.so 7933 plats=1=>[46]langs=1=> en,fr,ar,bn,pt_BR,bg,fr_CA,ca,hr,cs,da,nl,ar_EG,en_GB,et,fi,de,el,iw,hu,is,in, it,ja,ko,es,lv,lt,ms,es_MX,no,pl,pt,ro,ru,zh_CN,sk,sl,es_ES,sv,th,zh_TW, tr,uk,vi]]
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]]
    For 9.2.0.4 -> 9.2.0.6, install log shows:
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]] does not exist.
    This means that while patching from 9.2.0.4 -> 9.2.0.5, Installer copies the libcmdll.so library into libskgxn9.so, while patching from 9.2.0.4 -> 9.2.0.6 does not.
    ORACM is located in /app/oracle/ORACM which is different than ORACLE_HOME in customer's environment.
    WORKAROUND:
    Customer is using the following workaround:
    cd $ORACLE_HOME/rdbms/lib
    make -f ins_rdbms.mk rac_on ioracle ipc_udp
    RELATED BUGS:
    Bug 4169291

    Check if following MOS note helps.
    Series of ORA-7445 Errors After Applying 9.2.0.7.0 Patchset to 9.2.0.6.0 Database (Doc ID 373375.1)

  • "In-Memory Database Cache" option for Oracle 10g Enterprise Edition

    Hi,
    In one of our applications, we are using TimesTen 5.1.24 and Oracle 9i
    databases (platform - Solaris 9i).
    TimesTen holds application information which needs to be accessed quickly
    and Oracle 9i is a master application database.
    Now we are looking at an option of migrating from Oracle 9i to Oracle 10g
    database. While exploring about Oracle 10g features, came to know about
    "In-Memory Database Cache" option for Oracle Enterprise Edition. This made
    me to think about using Oracle 10g Enterprise Edition with "In-Memory
    Database Cache" option for our application.
    Following are the advantages that I could visualize by adopting the
    above-mentioned approach:
    1. Data reconciliation between Oracle and TimesTen is not required (i.e.
    data can be maintained only in Oracle tables and for caching "In-Memory
    Database Cache" can be used)
    2. Data maintenance is easy and gives one view access to data
    I have following queries regarding the above-mentioned solution:
    1. What is the difference between "TimesTen In-Memory Database" and
    "In-Memory Database Cache" in terms of features and licensing model?
    2. Is "In-Memory Database Cache" option integrated with Oracle 10g
    installable or a separate installable (i.e. TimesTen installable with only
    cache feature)?
    3. Is "In-Memory Database Cache" option same as that of "TimesTen Cache
    Connect to Oracle" option in TimesTen In-Memory Database?
    4. After integrating "In-Memory Database Cache" option with Oracle 10g, data
    access will happen only through Oracle sqlplus or OCI calls. Am I right here
    in making this statement?
    5. Is it possible to cache the result set of a join query in "In-Memory
    Database Cache"?
    In "Options and Packs" chapter in Oracle documentation
    (http://download.oracle.com/docs/cd/B19306_01/license.102/b14199/options.htm
    #CIHJJBGA), I encountered the following statement:
    "For the purposes of licensing Oracle In-Memory Database Cache, only the
    processors on which the TimesTen In-Memory Database component of the
    In-Memory Database Cache software is installed and/or running are counted
    for the purpose of determining the number of licenses required."
    We have servers with the following configuration. Is there a way to get the
    count of processors on which the Cache software could be installed and/or
    running? Please assist.
    Production box with 12 core 2 duo processors (24 cores)
    Pre-production box with 8 core 2 duo processors (16 cores)
    Development and test box with 2 single chip processors
    Development and test box with 4 single chip processors
    Development and test box with 6 single chip processors
    Thanks & Regards,
    Vijay

    Hi Vijay,
    regarding your questions:
    1. What is the difference between "TimesTen In-Memory Database" and
    "In-Memory Database Cache" in terms of features and licensing model?
    ==> The product has just been renamed and integrated better with the Oracle database: TimesTen == In-Memory Database Cache
    2. Is "In-Memory Database Cache" option integrated with Oracle 10g
    installable or a separate installable (i.e. TimesTen installable with only
    cache feature)?
    ==> Separate installation
    3. Is "In-Memory Database Cache" option same as that of "TimesTen Cache
    Connect to Oracle" option in TimesTen In-Memory Database?
    ==> Please have a look here: http://www.oracle.com/technology/products/timesten/quickstart/cc_qs_index.html
    This explains the differences.
    4. After integrating "In-Memory Database Cache" option with Oracle 10g, data
    access will happen only through Oracle sqlplus or OCI calls. Am I right here
    in making this statement?
    ==> Please see above mentioned papers
    5. Is it possible to cache the result set of a join query in "In-Memory
    Database Cache"?
    ==> Again ... ;-)
    Kind regards
    Mike

  • Setting isolation level with JDriver for Oracle/XA

    edocs (http://e-docs.bea.com/wls/docs70/oracle/trxjdbcx.html#1080746) states that,
    if using jDriver for Oracle/XA you can not set the transaction isolation level
    for a transaction and that 'Transactions use the transaction isolation level set
    on the connection or the default transaction isolation level for the database'.
    Does this mean that you shouldn't try to set it programmatically (fair enough)
    or that you can't set it in the weblogic deployment descriptor either? Also anybody
    got any idea what the default is likely to be if you are using an Oracle 9iR2
    database? Is this determined by some database setting?

    IJ wrote: (the question above, quoted in full)
    The system should honor the setting defined in the deployment descriptor,
    however, for oracle it may not be helpful to change it. Oracle provides two
    isolation levels. The default is always READ_COMMITTED. The other
    setting is SERIALIZABLE, but this hurts performance, and is also problematic
    in the way oracle implements it. For instance, even if you set SERIALIZABLE,
    oracle will not lock read data. It will allow other transactions to read and/or
    alter data that another ongoing SERIALIZABLE transaction has read. The
    only way to really lock read data in oracle is to issue oracle-specific SQL in
    your select: "SELECT ..... FOR UPDATE".
    All in all, you should collect a strong case for why you can't proceed with
    READ_COMMITTED first. Then you should research oracle's recommendations
    (and their problem record) with SERIALIZABLE.
    Joe Weinstein at BEA
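    For reference, a minimal SQL sketch of the two Oracle behaviours described above, against a hypothetical ACCOUNTS table (the isolation statement must be the first statement of the transaction):
    -- Per-transaction isolation; READ COMMITTED is Oracle's default.
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- Even under SERIALIZABLE, Oracle does not lock rows it merely reads;
    -- to hold them against concurrent writers you must lock explicitly:
    SELECT balance
    FROM   accounts
    WHERE  account_id = 42
    FOR UPDATE;
    COMMIT;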

  • Error in Installing RCU for Oracle Webcenter Content Imaging

    Hi,
    I am coming across this error while installing RCU for Oracle WebCenter Imaging with Microsoft SQL Server 2005 as the backend database. I am pasting the IPM.log file below, which shows the error during the installation. Please help me with this.
    JDBC SQLException - ErrorCode: 102SQLState:HY000 Message: [FMWGEN][SQLServer JDBC Driver][SQLServer]Incorrect syntax near 'Rem'.
    Error encountered executing SQL statement  FileName: 'D:\Oracle\middleware\RepositoryCreationUtility\rcuHome\\rcu\integration\\ipm\sql\createschema_ipm_sqlserver.sql' LineNumber: '18'
    SQL Statement: [execute as login = '$(SCHEMA_USER)'
    Rem Copyright (c) 2007, 2013, Oracle and/or its affiliates.
    Rem All rights reserved.
    CREATE TABLE DEFINITION (DEFINITION_ID NUMERIC(19) NOT NULL, DEFINITION_VERSION NUMERIC(19) NOT NULL, DEFINITION_TYPE $(RCU_VARCHAR)(16) NULL, DESCRIPTION $(RCU_VARCHAR)(1000) NULL, NAME $(RCU_VARCHAR)(200) NULL, PRIMARY KEY (DEFINITION_ID, DEFINITION_VERSION));
    CREATE TABLE DEFINITION_SECURITY (RID NUMERIC(19) NOT NULL, GUID $(RCU_VARCHAR)(255) NULL, DEFINITION_ID NUMERIC(19) NOT NULL, NAME $(RCU_VARCHAR)(255) NULL, SECURITYMEMBER_TYPE $(RCU_VARCHAR)(8) NULL, DEFINITION_VERSION NUMERIC(19) NOT NULL, CAN_DELETE SMALLINT NULL, CAN_GRANTACCESS SMALLINT NULL, CAN_MANAGE SMALLINT NULL, CAN_MODIFY SMALLINT NULL, CAN_VIEW SMALLINT NULL, PRIMARY KEY (RID, DEFINITION_ID, DEFINITION_VERSION));
    CREATE TABLE AUDIT_HISTORY (AUDITID NUMERIC(19) NOT NULL, AUDIT_DATE DATETIME NULL, PARAM1 $(RCU_VARCHAR)(255) NULL, PARAM2 $(RCU_VARCHAR)(255) NULL, SUBJECTID NUMERIC(19) NULL, AUDIT_TYPE $(RCU_VARCHAR)(32) NULL, USERNAME $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (AUDITID));
    CREATE TABLE APP_PROPERTIES (FASTCHECKIN BIT default 0 NULL, FULLTEXTSEARCH BIT default 0 NULL, APP_ID NUMERIC(19) NOT NULL, NEXTFIELDID NUMERIC(19) NULL, APP_VERSION NUMERIC(19) NOT NULL, REPOSITORY_ID NUMERIC(19) NULL, REPOSITORY_NAME $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (APP_ID, APP_VERSION));
    CREATE TABLE APP_FIELDDEFINITION (FIELD_ID NUMERIC(19) NOT NULL, APP_ID NUMERIC(19) NOT NULL, INDEXED BIT default 0 NULL, LENGTH INTEGER NULL, NAME $(RCU_VARCHAR)(50) NULL, REQUIRED BIT default 0 NULL, SCALE INTEGER NULL, FIELD_TYPE $(RCU_VARCHAR)(8) NULL, APP_VERSION NUMERIC(19) NOT NULL, DEFAULTVALUE_TYPE $(RCU_VARCHAR)(255) NULL, DEFAULTVALUE $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (FIELD_ID, APP_ID, APP_VERSION));
    CREATE TABLE APP_PICKLIST (PICKLIST_ID NUMERIC(19) NOT NULL, APP_ID NUMERIC(19) NOT NULL, FIELD_ID NUMERIC(19) NOT NULL, APP_VERSION NUMERIC(19) NOT NULL, ITEMVALUE_TYPE $(RCU_VARCHAR)(255) NULL, ITEMVALUE $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (PICKLIST_ID, APP_ID, FIELD_ID, APP_VERSION));
    CREATE TABLE APP_LIFECYCLEPOLICY (APP_ID NUMERIC(19) NOT NULL, RETENTIONPOLICY $(RCU_VARCHAR)(255) NULL, VOLUME_NAME $(RCU_VARCHAR)(200) NULL, APP_VERSION NUMERIC(19) NOT NULL, PRIMARY KEY (APP_ID, APP_VERSION));
    CREATE TABLE APP_BPELCONFIG (COMPOSITE $(RCU_VARCHAR)(255) NULL, ENABLED BIT default 0 NULL, APP_ID NUMERIC(19) NOT NULL, SERVICE_OPERATION $(RCU_VARCHAR)(255) NULL, SERVICE $(RCU_VARCHAR)(255) NULL, APP_VERSION NUMERIC(19) NOT NULL, CONNECTION_ID NUMERIC(19) NULL, CONNECTION_NAME $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (APP_ID, APP_VERSION));
    CREATE TABLE APP_BPELPAYLOADITEM (PAYLOAD_KEY $(RCU_VARCHAR)(255) NOT NULL, APP_ID NUMERIC(19) NOT NULL, MAPPINGFUNCTION $(RCU_VARCHAR)(32) NULL, PAYLOAD_VALUE $(RCU_VARCHAR)(255) NULL, APP_VERSION NUMERIC(19) NOT NULL, XMLTYPE $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (PAYLOAD_KEY, APP_ID, APP_VERSION));
    CREATE TABLE APP_STORAGESTAGE (SEQNUM NUMERIC(19) NOT NULL, DURATION INTEGER NULL, DURATIONUNIT $(RCU_VARCHAR)(8) NULL, APP_ID NUMERIC(19) NOT NULL, INDEFINITE BIT default 0 NULL, APP_VERSION NUMERIC(19) NOT NULL, VOLUME_NAME $(RCU_VARCHAR)(200) NULL, PRIMARY KEY (SEQNUM, APP_ID, APP_VERSION));
    CREATE TABLE DOCUMENT_SECURITY (RID NUMERIC(19) NOT NULL, GUID $(RCU_VARCHAR)(255) NULL, APP_ID NUMERIC(19) NOT NULL, NAME $(RCU_VARCHAR)(255) NULL, SECURITYMEMBER_TYPE $(RCU_VARCHAR)(8) NULL, APP_VERSION NUMERIC(19) NOT NULL, CAN_ANNOTHIDDEN SMALLINT NULL, CAN_ANNOTRESTRICTED SMALLINT NULL, CAN_ANNOTSTANDARD SMALLINT NULL, CAN_DELETE SMALLINT NULL, CAN_GRANTACCESS SMALLINT NULL, CAN_LOCKADMINISTRATOR SMALLINT NULL, CAN_VIEW SMALLINT NULL, CAN_WRITE SMALLINT NULL, PRIMARY KEY (RID, APP_ID, APP_VERSION));
    CREATE TABLE INPUT_PROPERTIES (INPUT_ID NUMERIC(19) NOT NULL, ON_LINE BIT default 0 NULL, PRIORITY INTEGER NULL, INPUT_VERSION NUMERIC(19) NOT NULL, PRIMARY KEY (INPUT_ID, INPUT_VERSION));
    CREATE TABLE INPUT_MAPPINGS (INPUT_ID NUMERIC(19) NOT NULL, FILEINPUTSECTION $(RCU_VARCHAR)(255) NULL, INPUT_VERSION NUMERIC(19) NOT NULL, APP_ID NUMERIC(19) NULL, APP_NAME $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (INPUT_ID, INPUT_VERSION));
    CREATE TABLE INPUT_FIELDMAP (SEQNUM NUMERIC(19) NOT NULL, DATEFORMAT $(RCU_VARCHAR)(255) NULL, INPUT_ID NUMERIC(19) NOT NULL, INPUTSECTION $(RCU_VARCHAR)(255) NULL, USEDEFAULT BIT default 0 NULL, INPUT_VERSION NUMERIC(19) NOT NULL, APP_ID NUMERIC(19) NULL, APP_NAME $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (SEQNUM, INPUT_ID, INPUT_VERSION));
    CREATE TABLE INPUT_SOURCEPROPERTIES (DATASAMPLE $(RCU_VARCHAR)(255) NULL, DATASOURCE $(RCU_VARCHAR)(255) NULL, INPUT_ID NUMERIC(19) NOT NULL, INPUTSOURCE $(RCU_VARCHAR)(255) NULL, INPUT_VERSION NUMERIC(19) NOT NULL, PRIMARY KEY (INPUT_ID, INPUT_VERSION));
    CREATE TABLE INPUT_SOURCESETTING (SETTING_KEY $(RCU_VARCHAR)(255) NOT NULL, INPUT_ID NUMERIC(19) NOT NULL, SETTING_VALUE $(RCU_VARCHAR)(255) NULL, INPUT_VERSION NUMERIC(19) NOT NULL, PRIMARY KEY (SETTING_KEY, INPUT_ID, INPUT_VERSION));
    CREATE TABLE SEARCH_PARAMETER (PARAMETER_ID NUMERIC(19) NOT NULL, DEFAULTOPERATOR INTEGER NULL, SEARCH_ID NUMERIC(19) NOT NULL, NAME $(RCU_VARCHAR)(255) NULL, OPERATORTEXT $(RCU_VARCHAR)(255) NULL, PICKLISTAPPLICATIONID NUMERIC(19) NULL, PICKLISTFIELDID NUMERIC(19) NULL, PROMPT $(RCU_VARCHAR)(255) NULL, READONLY BIT default 0 NULL, REQUIRED BIT default 0 NULL, SEARCH_VERSION NUMERIC(19) NOT NULL, SEARCHVALUETYPE INTEGER NULL, FIELD_TYPE $(RCU_VARCHAR)(8) NULL, FIELD_VALUE $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (PARAMETER_ID, SEARCH_ID, SEARCH_VERSION));
    CREATE TABLE SEARCH_POSSIBLEOPERATOR (OPERATOR_ID NUMERIC(19) NOT NULL, SEARCH_ID NUMERIC(19) NOT NULL, OPERATOR INTEGER NULL, SEARCH_VERSION NUMERIC(19) NOT NULL, PARAMETER_ID NUMERIC(19) NULL, PRIMARY KEY (OPERATOR_ID, SEARCH_ID, SEARCH_VERSION));
    CREATE TABLE SEARCH_RESULTCOLUMN (RESULT_COLUMN_ID NUMERIC(19) NOT NULL, COLUMNTITLE $(RCU_VARCHAR)(255) NULL, SEARCH_ID NUMERIC(19) NOT NULL, SEARCH_VERSION NUMERIC(19) NOT NULL, PRIMARY KEY (RESULT_COLUMN_ID, SEARCH_ID, SEARCH_VERSION));
    CREATE TABLE SEARCH_SELECTEDFIELD (SELECTEDFIELDID NUMERIC(19) NOT NULL, SEARCH_ID NUMERIC(19) NOT NULL, PERSISTEDAPPLICATIONID NUMERIC(19) NULL, PERSISTEDFIELDID NUMERIC(19) NULL, PROPERTY INTEGER NULL, SEARCH_VERSION NUMERIC(19) NOT NULL, RESULT_COLUMN_ID NUMERIC(19) NULL, PRIMARY KEY (SELECTEDFIELDID, SEARCH_ID, SEARCH_VERSION));
    CREATE TABLE SEARCH_PROPERTIES (SEARCHPROPERTIESID NUMERIC(19) NOT NULL, SEARCH_ID NUMERIC(19) NOT NULL, MAXROWS INTEGER NULL, ON_LINE BIT default 0 NULL, SEARCHINSTRUCTIONS $(RCU_VARCHAR)(1000) NULL, SEARCH_VERSION NUMERIC(19) NOT NULL, PRIMARY KEY (SEARCHPROPERTIESID, SEARCH_ID, SEARCH_VERSION));
    CREATE TABLE SEARCH_NODE (SEARCHNODEID NUMERIC(19) NOT NULL, DTYPE $(RCU_VARCHAR)(31) NULL, ALWAYSDISPLAYPARENTHESES BIT default 0 NULL, SEARCH_ID NUMERIC(19) NOT NULL, LEFTID NUMERIC(19) NULL, RIGHTID NUMERIC(19) NULL, SEARCHOPERATOR INTEGER NULL, SEARCH_VERSION NUMERIC(19) NOT NULL, PARAMETERNAME $(RCU_VARCHAR)(255) NULL, PROPERTY INTEGER NULL, FIELD_ID NUMERIC(19) NULL, FIELD_NAME $(RCU_VARCHAR)(255) NULL, SEARCHVALUETYPE INTEGER NULL, FIELD_TYPE $(RCU_VARCHAR)(8) NULL, FIELD_VALUE $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (SEARCHNODEID, SEARCH_ID, SEARCH_VERSION));
    CREATE TABLE SEARCH_APP_EXPR (APPLICATIONEXPRESSIONID NUMERIC(19) NOT NULL, SEARCH_ID NUMERIC(19) NOT NULL, ROOTID NUMERIC(19) NULL, SEARCH_VERSION NUMERIC(19) NOT NULL, APP_ID NUMERIC(19) NULL, APP_NAME $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (APPLICATIONEXPRESSIONID, SEARCH_ID, SEARCH_VERSION));
    CREATE TABLE SEARCH_APP_ITEM (APPITEMID NUMERIC(19) NOT NULL, FIELDDECIMAL BIT default 0 NULL, FIELDSCALE INTEGER NULL, SEARCH_ID NUMERIC(19) NOT NULL, REPOSITORYDOCUMENTID $(RCU_VARCHAR)(255) NULL, REPOSITORYFIELDID $(RCU_VARCHAR)(255) NULL, SEARCH_VERSION NUMERIC(19) NOT NULL, APPLICATION_ID NUMERIC(19) NULL, APPLICATION_NAME $(RCU_VARCHAR)(255) NULL, FIELD_ID NUMERIC(19) NULL, FIELD_NAME $(RCU_VARCHAR)(255) NULL, SOURCE_ID NUMERIC(19) NULL, SOURCE_NAME $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (APPITEMID, SEARCH_ID, SEARCH_VERSION));
    CREATE TABLE PREFERENCES (CONTEXT $(RCU_VARCHAR)(255) NOT NULL, USERGUID $(RCU_VARCHAR)(255) NOT NULL, SETTING $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (CONTEXT, USERGUID));
    CREATE TABLE SYSTEM_SECURITY (SYSTEMAREA $(RCU_VARCHAR)(16) NOT NULL, RID NUMERIC(19) NOT NULL, GUID $(RCU_VARCHAR)(255) NULL, NAME $(RCU_VARCHAR)(255) NULL, SECURITYMEMBER_TYPE $(RCU_VARCHAR)(8) NULL, CAN_ADMINISTOR SMALLINT NULL, CAN_CREATE SMALLINT NULL, PRIMARY KEY (SYSTEMAREA, RID));
    CREATE TABLE BATCH (BATCHID NUMERIC(19) NOT NULL, CREATEDBYPROCESS $(RCU_VARCHAR)(255) NULL, CURRENTSTATE $(RCU_VARCHAR)(255) NULL, ENDTIME DATETIME NULL, FAILEDDOCCOUNT INTEGER NULL, SOURCENAME $(RCU_VARCHAR)(255) NULL, SOURCETIME DATETIME NULL, STARTTIME DATETIME NULL, SUCCESSFULDOCCOUNT INTEGER NULL, INPUT_ID NUMERIC(19) NULL, INPUT_NAME $(RCU_VARCHAR)(255) NULL, APP_ID NUMERIC(19) NULL, APP_NAME $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (BATCHID));
    CREATE TABLE TICKETS (TICKET_ID $(RCU_VARCHAR)(255) NOT NULL, EXPIRATION DATETIME NULL, SINGLEUSE BIT default 0 NULL, TICKETEXPIRES BIT default 0 NULL, PRIMARY KEY (TICKET_ID));
    CREATE TABLE TICKETPARAMETERS (ID INTEGER NOT NULL, PARAMETERNAME $(RCU_VARCHAR)(255) NULL, PARAMETERVALUE $(RCU_VARCHAR)(255) NULL, PARAM_TICKET_ID $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (ID));
    CREATE TABLE CONNECTION_PROPERTIES (CONNECTION_TYPE $(RCU_VARCHAR)(255) NULL, CONNECTION_ID NUMERIC(19) NOT NULL, CONNECTION_VERSION NUMERIC(19) NOT NULL, PRIMARY KEY (CONNECTION_ID, CONNECTION_VERSION));
    CREATE TABLE CONNECTION_DETAILS (DETAILKEY $(RCU_VARCHAR)(255) NOT NULL, DETAILVALUE $(RCU_VARCHAR)(255) NULL, CONNECTION_ID NUMERIC(19) NOT NULL, CONNECTION_VERSION NUMERIC(19) NOT NULL, PRIMARY KEY (DETAILKEY, CONNECTION_ID, CONNECTION_VERSION));
    CREATE TABLE REPOSITORY_APPCONTEXT (APP_ID NUMERIC(19) NOT NULL, PRIMARY KEY (APP_ID));
    CREATE TABLE REPOSITORY_APPDETAILS (DETAILKEY $(RCU_VARCHAR)(255) NOT NULL, APP_ID NUMERIC(19) NOT NULL, DETAILVALUE $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (DETAILKEY, APP_ID));
    CREATE TABLE FILINGSTATE (ID INTEGER NOT NULL, BATCHID NUMERIC(19) NULL, DOCUMENTNUMBER NUMERIC(19) NULL, GOODDOCCOUNT NUMERIC(19) NULL, INPUTSOURCENAME $(RCU_VARCHAR)(255) NULL, JOBDATASOURCE $(RCU_VARCHAR)(255) NULL, APP_ID NUMERIC(19) NULL, APP_NAME $(RCU_VARCHAR)(255) NULL, INPUT_ID NUMERIC(19) NULL, INPUT_NAME $(RCU_VARCHAR)(255) NULL, PRIMARY KEY (ID));
    CREATE TABLE BPEL_FAULT_DATA (DOCID $(RCU_VARCHAR)(40) NOT NULL, APPID NUMERIC(19) NULL, BATCHID NUMERIC(19) NULL, FAULTDATE DATETIME NULL, FAULTMESSAGE $(RCU_VARCHAR)(1000) NULL, PRIMARY KEY (DOCID));
    ALTER TABLE DEFINITION_SECURITY ADD CONSTRAINT DFNTONSECURITYDFNTONID FOREIGN KEY (DEFINITION_ID, DEFINITION_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE APP_PROPERTIES ADD CONSTRAINT APP_PROPERTIES_APP_ID FOREIGN KEY (APP_ID, APP_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE APP_FIELDDEFINITION ADD CONSTRAINT PPFIELDDEFINITIONAPPID FOREIGN KEY (APP_ID, APP_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE APP_PICKLIST ADD CONSTRAINT APP_PICKLIST_FIELD_ID FOREIGN KEY (FIELD_ID, APP_ID, APP_VERSION) REFERENCES APP_FIELDDEFINITION (FIELD_ID, APP_ID, APP_VERSION);
    ALTER TABLE APP_LIFECYCLEPOLICY ADD CONSTRAINT PPLIFECYCLEPOLICYAPPID FOREIGN KEY (APP_ID, APP_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE APP_BPELCONFIG ADD CONSTRAINT APP_BPELCONFIG_APP_ID FOREIGN KEY (APP_ID, APP_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE APP_BPELPAYLOADITEM ADD CONSTRAINT PPBPELPAYLOADITEMAPPID FOREIGN KEY (APP_ID, APP_VERSION) REFERENCES APP_BPELCONFIG (APP_ID, APP_VERSION);
    ALTER TABLE APP_STORAGESTAGE ADD CONSTRAINT APP_STORAGESTAGEAPP_ID FOREIGN KEY (APP_ID, APP_VERSION) REFERENCES APP_LIFECYCLEPOLICY (APP_ID, APP_VERSION);
    ALTER TABLE DOCUMENT_SECURITY ADD CONSTRAINT DOCUMENTSECURITYAPP_ID FOREIGN KEY (APP_ID, APP_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE INPUT_PROPERTIES ADD CONSTRAINT INPUTPROPERTIESINPUTID FOREIGN KEY (INPUT_ID, INPUT_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE INPUT_MAPPINGS ADD CONSTRAINT INPUT_MAPPINGSINPUT_ID FOREIGN KEY (INPUT_ID, INPUT_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE INPUT_FIELDMAP ADD CONSTRAINT INPUT_FIELDMAPINPUT_ID FOREIGN KEY (INPUT_ID, INPUT_VERSION) REFERENCES INPUT_MAPPINGS (INPUT_ID, INPUT_VERSION);
    ALTER TABLE INPUT_SOURCEPROPERTIES ADD CONSTRAINT NPTSURCEPROPERTIESNPTD FOREIGN KEY (INPUT_ID, INPUT_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE INPUT_SOURCESETTING ADD CONSTRAINT NPTSOURCESETTINGNPUTID FOREIGN KEY (INPUT_ID, INPUT_VERSION) REFERENCES INPUT_SOURCEPROPERTIES (INPUT_ID, INPUT_VERSION);
    ALTER TABLE SEARCH_PARAMETER ADD CONSTRAINT SARCHPARAMETERSEARCHID FOREIGN KEY (SEARCH_ID, SEARCH_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE SEARCH_POSSIBLEOPERATOR ADD CONSTRAINT SRCHPSSBLPERATORPRMTRD FOREIGN KEY (PARAMETER_ID, SEARCH_ID, SEARCH_VERSION) REFERENCES SEARCH_PARAMETER (PARAMETER_ID, SEARCH_ID, SEARCH_VERSION);
    ALTER TABLE SEARCH_RESULTCOLUMN ADD CONSTRAINT SRCHRESULTCOLUMNSRCHID FOREIGN KEY (SEARCH_ID, SEARCH_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE SEARCH_SELECTEDFIELD ADD CONSTRAINT SRCHSLCTDFELDRSLTCLMND FOREIGN KEY (RESULT_COLUMN_ID, SEARCH_ID, SEARCH_VERSION) REFERENCES SEARCH_RESULTCOLUMN (RESULT_COLUMN_ID, SEARCH_ID, SEARCH_VERSION);
    ALTER TABLE SEARCH_PROPERTIES ADD CONSTRAINT SARCHPROPERTIESSARCHID FOREIGN KEY (SEARCH_ID, SEARCH_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE SEARCH_NODE ADD CONSTRAINT SEARCH_NODE_SEARCH_ID FOREIGN KEY (SEARCH_ID, SEARCH_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE SEARCH_APP_EXPR ADD CONSTRAINT SEARCHAPPEXPRSEARCH_ID FOREIGN KEY (SEARCH_ID, SEARCH_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE SEARCH_APP_ITEM ADD CONSTRAINT SEARCHAPPITEMSEARCH_ID FOREIGN KEY (SEARCH_ID, SEARCH_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE TICKETPARAMETERS ADD CONSTRAINT TCKTPRMETERSPRMTCKETID FOREIGN KEY (PARAM_TICKET_ID) REFERENCES TICKETS (TICKET_ID);
    ALTER TABLE CONNECTION_PROPERTIES ADD CONSTRAINT CNNCTNPRPERTIESCNNCTND FOREIGN KEY (CONNECTION_ID, CONNECTION_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE CONNECTION_DETAILS ADD CONSTRAINT CNNCTNDETAILSCNNCTONID FOREIGN KEY (CONNECTION_ID, CONNECTION_VERSION) REFERENCES DEFINITION (DEFINITION_ID, DEFINITION_VERSION);
    ALTER TABLE REPOSITORY_APPDETAILS ADD CONSTRAINT RPSITORYAPPDETAILSPPID FOREIGN KEY (APP_ID) REFERENCES REPOSITORY_APPCONTEXT (APP_ID);
    CREATE TABLE SEQUENCE (SEQ_NAME $(RCU_VARCHAR)(50) NOT NULL, SEQ_COUNT NUMERIC(28) NULL, PRIMARY KEY (SEQ_NAME));
    INSERT INTO SEQUENCE(SEQ_NAME, SEQ_COUNT) values ('SEQ_SECURITY', 0);
    INSERT INTO SEQUENCE(SEQ_NAME, SEQ_COUNT) values ('SEQ_BATCH', 0);
    INSERT INTO SEQUENCE(SEQ_NAME, SEQ_COUNT) values ('SEQ_FILINGSTATE', 0);
    INSERT INTO SEQUENCE(SEQ_NAME, SEQ_COUNT) values ('SEQ_NAMEID', 0);
    INSERT INTO SEQUENCE(SEQ_NAME, SEQ_COUNT) values ('SEQ_GEN', 0);
    INSERT INTO SEQUENCE(SEQ_NAME, SEQ_COUNT) values ('SEQ_TICKET_PARAM', 0);
    INSERT INTO SEQUENCE(SEQ_NAME, SEQ_COUNT) values ('SEQ_AUDIT', 0);
    revert
    java.sql.SQLException: [FMWGEN][SQLServer JDBC Driver][SQLServer]Incorrect syntax near 'Rem'.
    Thanks,
    Kumar G

    Hi,
    I just had this message yesterday during my install.
    You have to update those 2 files:
    <RCU Home>\rcu\integration\ipm\sql\createtables_ipm_sqlserver.sql
    <RCU Home>\rcu\integration\ipm\sql\createuser_ipm_sqlserver.sql
    The first 2 lines begin with "Rem ...", which is not correct for SQL Server: I added "--" and it worked.
    Regards,
    Guénaël
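    For illustration, after that edit the header of each affected script simply reads as SQL Server comments (the Rem lines themselves are the ones shown in the log above):
    -- Rem Copyright (c) 2007, 2013, Oracle and/or its affiliates.
    -- Rem All rights reserved.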

  • How to install and where to get kernal source for Oracle VM Server

    Hi guys,
    Can someone point me in the right direction for the Oracle VM Server kernel source? The server I am using won't ping, and I found the cause: it doesn't recognise the network adapter and says there are no drivers.
    But the HP website states I have to have the kernel source installed so that I can build the driver.
    Any suggestions or help on the matter?
    Thanks

    Thanks
    I got my server to ping.
    Now I have run into more trouble.
    I have been following Oracle's installation guide for installing a virtual machine, but I am struggling to install it, as their instructions don't match what's on the screen.
    They tell you to copy each CD into a directory
    and then mount that directory as an NFS mount.
    I did all that,
    and then it tells you to run virt-install.
    So I do that, and it all works great until the end part. It is meant to ask what your installation source directory is, and you are supposed to enter nfs:selfhostname_ip_address:/path/to/files
    and away you go.
    However, my server says "what would you like to use as your virtual CD image",
    and so I type nfs:server_ip_address:/path/to/files etc., and I get the error CD MUST EXIST.
    So to get around this, I just typed /OVS/iso_pool/Disc1.img
    and away we go, installation starts....
    However, the installation then asks for disc 2, and then I get stuck: how do I tell it that I changed the disc and it's ready?
    I tried an installation mounting the disc 1 ISO from a CD to /mnt,
    and when it wanted disc 2 I did umount -l /mnt ... it said inodes busy, so I just forced the eject of the CD physically on the drive
    and then mount /dev/cdrom /mnt for disc 3,
    but it didn't detect that I had changed the CD.
    Can anyone help me? It's really urgent; I have to get this server up and running by today if possible.

  • Oracle SQL template to create re-usable DDL/DML Scripts for Oracle database

    Hi,
    I have a requirement to put together an Oracle SQL template to create re-usable DDL/DML scripts for Oracle databases.
    Only the Oracle DBA will be running the scripts so permissions is not an issue.
    The workflow for any DDL is as follows:-
    1) New Table
    a. Check if the table exists from the system/admin views.
    b. If table exists then give message "Table Exists"
    c. If table does not exist then execute DDL code
    2) Add Column
    a. Check if Column exists for a given table from system/admin views
    b. If column exists in the specified table,
    b1. backup table.
    b2. alter table to make changes to the column
    b3. verify data or execute dml script convert from backup to the new change.
    c. If Column does not exist
    c1. backup table
    c2. alter table to add column
    c3. execute dml to populate column with default value.
    The DML scripts are for populating base tables with data required for business operations.
    3) Add new row
    a. check if row exists by comparing old values of each column with new values to be added for the new record.
    b. If exists, give message row exists
    c. If not exists, add new record.
    4) Update existing record (We have createtime columns in these tables so changes can be tracked)
    a. check if row exists using primary key.
    b. If exists,
    b1. deactivate the record using the "active" column of the table
    b2. Add new record with the changes required.
    c. If does not exist, add new record with the changes required.
    Could you please help with some ideas which can get this done accurately?
    I have tried several ways, but I am not able to put together something that fulfills all requirements.
    Thank you,

    First let me address your question. (This is the easy part.)
    1. The existence of tables can be found in DBA_TABLES. Query it and then use conditional logic and EXECUTE IMMEDIATE to process the DDL.
    2. The existence of table columns is found in DBA_TAB_COLUMNS. Query it and then conditionally execute your DDL. You can copy the "before picture" of the table using that same dba view, or even better, use DBMS_METADATA.
    As for your DML scripts, they should be restartable, reversible, and re-run-able. They should "fail gracefully" on error, be written in such a way that they can run twice in a row without creating duplicate changes.
    3. Adding appropriate constraints can prevent invalid duplicate rows. Also, you can usually add to the where clause so that the DML does only what it needs to do without even relying on the constraint (but the constraint is there as a safeguard). Look up the MERGE statement to learn how to do an UPSERT (update/insert), which will let you conditionally "deactivate" (update) or insert a record. Anything that you cannot do in SQL can be done with simple procedural code.
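    A hedged PL/SQL sketch of points 1-3, using hypothetical names (APP_OWNER, MY_TABLE, NEW_COL); it is only an outline of the pattern, not a finished deployment script:
    DECLARE
      v_cnt PLS_INTEGER;
    BEGIN
      -- 1) Conditional CREATE TABLE driven by DBA_TABLES
      SELECT COUNT(*) INTO v_cnt
      FROM   dba_tables
      WHERE  owner = 'APP_OWNER' AND table_name = 'MY_TABLE';
      IF v_cnt = 0 THEN
        EXECUTE IMMEDIATE
          'CREATE TABLE app_owner.my_table (id NUMBER PRIMARY KEY, active CHAR(1))';
      ELSE
        DBMS_OUTPUT.PUT_LINE('Table Exists');
      END IF;
      -- 2) Conditional ADD COLUMN driven by DBA_TAB_COLUMNS
      SELECT COUNT(*) INTO v_cnt
      FROM   dba_tab_columns
      WHERE  owner = 'APP_OWNER' AND table_name = 'MY_TABLE' AND column_name = 'NEW_COL';
      IF v_cnt = 0 THEN
        EXECUTE IMMEDIATE
          'ALTER TABLE app_owner.my_table ADD (new_col VARCHAR2(30) DEFAULT ''N'')';
      END IF;
    END;
    /
    -- 3) A re-runnable "add row if missing" written as a MERGE (upsert):
    MERGE INTO app_owner.my_table t
    USING (SELECT 1 AS id, 'Y' AS active FROM dual) s
    ON    (t.id = s.id)
    WHEN NOT MATCHED THEN
      INSERT (id, active) VALUES (s.id, s.active);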
    Now, to the heart of the matter...
    You think I did not understand your requirements?
    Please be respectful of people's comments. Many of us are professionals with decades of experience working with databases and Oracle technology. We volunteer our valuable time and knowledge here for free. It is extremely common for someone to post what they feel is an easy SQL or PL/SQL question without stating the real goal--the business objective. Experienced people will spot that the "wrong question" has been asked, and then cut to the chase.
    We have some good questions for you. Not questions we need answers to, but questions you need to ask yourself and your team. You need to reexamine this post and deduce what those questions are. But I'll give you some hints: Why do you need to do what you are asking? And will this construct you are asking for even solve the root cause of your problems?
    Then ponder the following quotations about asking the right question:
    Good questions outrank easy answers.
    — Paul Samuelson
    The only interesting answers are those which destroy the questions.
    — Susan Sontag
    The scientific mind does not so much provide the right answers as ask the right questions.
    — Claude Levi-Strauss
    You can tell whether a man is clever by his answers. You can tell whether a man is wise by his questions.
    — Mahfouz Naguib
    One hears only those questions for which one is able to find answers.
    — Friedrich Nietzsche
    Be patient towards all that is unresolved in your heart and try to love the questions themselves.
    — Rainer Maria Rilke
    What people think of as the moment of discovery is really the discovery of the question.
    — Jonas Salk
    Judge a man by his questions rather than his answers.
    — Voltaire
    The ability to ask the right question is more than half the battle of finding the answer.
    — Thomas J. Watson
