JDBC Lookup - Import table data from a different schema in same DB

Hi XI Experts,
We are facing an issue while importing a Database table into the external definition in PI 7.1.
The details are as below:
I have configured user 'A' in the PI communication channel to access the database, but the table that I want to access is in schema "B". Because of this, the table I need to import does not appear in the list of available tables.
In other words, I am trying to access a table in a different schema in the same database. Please note that my user has been granted all the permissions required to access the other schema; even so, I cannot access the table in that schema.
Kindly provide your valuable suggestions as to how I can import table which is present in another schema but in the same Database.
Regards,
Subbu

If you are using PI 7.1, then you can use a JDBC Lookup to import JDBC metadata (table structures) from the database. Configure a JDBC receiver communication channel and specify a username and password that have permission to access both schema A and schema B of the database. Specify the database name in the connection string. You should then be able to import tables from both schemas.
Please refer to these links:
SAP PI 7.1 Mapping Enhancements Series: Graphical Support for JDBC and RFC Lookups
How to use JDBC Lookup in PI 7.1 ?
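As a rough illustration only (the host, SID, and table/column names below are placeholders, not values from this thread), the connection data of the receiver channel and the schema-qualified table reference used by the lookup might look like this:

-- Hypothetical Oracle connection string for the JDBC receiver channel:
--   jdbc:oracle:thin:@<dbhost>:1521:<SID>
-- Driver: oracle.jdbc.driver.OracleDriver
-- If the channel user has SELECT granted on the table in schema B,
-- the lookup can address it by its schema-qualified name:
SELECT material_id, material_desc
FROM   b.material_master          -- table owned by schema B
WHERE  material_id = ?;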

Similar Messages

  • EXPORT/IMPORT TABLE DATA FROM ONE SCHEMA TO ANOTHER ONE (S.O.S)

    Hi,
    I urgently need your help :( I have two different instances, and in each of them there are two schemas, A and B, whose tables are the same. I need to transfer JUST the table data from A's tables to B's tables. How can I do this?
    Thanks in advance,
    Isabel

    The total number of tables is 989 :(
    And how many of them do you want? If 988, and if you're on a 10g database, you can use the EXCLUDE parameter of expdp/impdp.
    Nicolas.
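    If the two schemas are in the same instance and the user running the copy can read A's tables and write B's, a plain SQL alternative (just a sketch; EMPLOYEES is a placeholder table name, and the statement has to be repeated per table) is a data-only copy:

    -- Data-only copy of one table from schema A into schema B
    INSERT INTO b.employees
    SELECT * FROM a.employees;
    COMMIT;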

  • Character problem in toad import table data from excel

    Hi everybody,
    I want to import data from an excel file to an Oracle table, so I'm using Toad's "Import Table Data" tool for this purpose.
    The problem is that Oracle doesn't import non-English characters properly.
    My database is XE and character set is 'AL32UTF8'.
    I searched web, but didn't find the solution.
    Please help...

    Hi again,
    Thank you for your reply Srini, but it is not about Toad.
    The character encoding of the Excel file was causing the problem.
    I exported a Unicode-encoded CSV file from Excel and tried to load the data from that file. It worked.
    Thanks.

  • Compare Table Data on 2 different databases having same schema

    I need to compare data in all the tables in 2 different databases having same schema.
    If there is any difference in data in any table on Database1 and Database2 then I need to update/insert/delete the rows in my table in Database2.
    So Database1 is my source database and Database2 is my sync database. I cannot use expdp/impdp for the tables, as I do not have sufficient privileges on the database server.
    Also I cannot drop and recreate the tables as they are huge.
    Can anyone please guide me on how to compare the data and write a script that compares, say, Database1.Table1 and Database2.Table1 and then does the corresponding inserts/updates/deletes on Database2.Table1?
    Thanks

    Karthick_Arp wrote:
    Do you have a DBLink? If yes, you can do this.
    1. Log in to Database-2 and run this code:
    begin
      for i in (select table_name from user_tables)
      loop
        execute immediate 'truncate table ' || i.table_name;
      end loop;
    end;
    /
    This will empty all the tables in your Database-2. Now what you need is to just populate the data from Database-1.

    This might result in an error if any of the tables have referential integrity constraints on them.
    From the 10g documentation:
    Restrictions on Truncating Tables
    You cannot individually truncate a table that is part of a cluster. You must either truncate the cluster, delete all rows from the table, or drop and re-create the table.
    You cannot truncate the parent table of an enabled referential integrity constraint. You must disable the constraint before truncating the table. An exception is that you can truncate the table if the integrity constraint is self-referential.
    If a domain index is defined on the table, then neither the index nor any index partitions can be marked IN_PROGRESS.
    I would go for a normal MERGE instead. Also change the cursor to select the table names so that the child tables are handled first and then the parent tables.
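    A minimal sketch of such a MERGE over a database link, assuming a link named DB1_LINK from Database-2 to Database-1 and a placeholder table T1 with key column ID (all names are illustrative, not taken from the thread):

    -- Upsert rows from Database-1 into the matching table on Database-2
    MERGE INTO t1 tgt
    USING (SELECT id, col1, col2 FROM t1@db1_link) src
    ON (tgt.id = src.id)
    WHEN MATCHED THEN
      UPDATE SET tgt.col1 = src.col1,
                 tgt.col2 = src.col2
    WHEN NOT MATCHED THEN
      INSERT (id, col1, col2)
      VALUES (src.id, src.col1, src.col2);

    -- Remove local rows that no longer exist in the source
    DELETE FROM t1
    WHERE  id NOT IN (SELECT id FROM t1@db1_link);

    COMMIT;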

  • Importing table data from one schema to another schema

    Hi All,
    I exported only the table rows of Schema A, and I am trying to import them into Schema B.
    While importing I am getting the Oracle error "row rejected, integrity constraint violated -
    parent key not found".
    What I did is disable all the constraints in Schema B through a script and import the data. The data imported successfully, but when executing the script that re-enables the constraints in Schema B, the constraints are not enabled because of parent/child relationship problems, even though I executed the script many times.
    Note: Schema A and Schema B have the same structure; Schema A contains data but
    Schema B does not.
    Does anybody have any idea about this?
    Thanks,

    Hi,
    But I want the data to be imported completely, without losing anything.
    I am disabling and enabling the constraints using the following queries:
    -- Primary key constraints --
    select 'ALTER TABLE '||A.TABLE_NAME||' DISABLE CONSTRAINT '||B.CONSTRAINT_NAME||';'
    from   user_constraints A, user_constraints B
    where  A.TABLE_NAME = B.TABLE_NAME
    and    A.CONSTRAINT_TYPE = 'P';

    -- Foreign key constraints --
    select 'ALTER TABLE '||A.TABLE_NAME||' DISABLE CONSTRAINT '||B.CONSTRAINT_NAME||';'
    from   user_constraints A, user_constraints B
    where  A.TABLE_NAME = B.TABLE_NAME
    and    A.CONSTRAINT_TYPE = 'R';

    -- Check constraints --
    select 'ALTER TABLE '||A.TABLE_NAME||' DISABLE CONSTRAINT '||B.CONSTRAINT_NAME||';'
    from   user_constraints A, user_constraints B
    where  A.TABLE_NAME = B.TABLE_NAME
    and    A.CONSTRAINT_TYPE = 'C';
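    For what it's worth, a simpler sketch that targets only the foreign key (type 'R') constraints directly from USER_CONSTRAINTS, disabling them before the import and re-enabling them afterwards (the import itself runs in between):

    -- Disable all foreign key constraints before the import
    BEGIN
      FOR c IN (SELECT table_name, constraint_name
                FROM   user_constraints
                WHERE  constraint_type = 'R') LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE ' || c.table_name ||
                          ' DISABLE CONSTRAINT ' || c.constraint_name;
      END LOOP;
    END;
    /

    -- ... run the import here ...

    -- Re-enable the foreign keys; rows whose parent is still missing raise ORA-02298
    BEGIN
      FOR c IN (SELECT table_name, constraint_name
                FROM   user_constraints
                WHERE  constraint_type = 'R') LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE ' || c.table_name ||
                          ' ENABLE CONSTRAINT ' || c.constraint_name;
      END LOOP;
    END;
    /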

  • MapViewer metadata problem - accessing spatial data in a different schema.

    I have a MapViewer application that uses data from three different schemas.
    1. Dynamic Themes come from schema A.
    2. Static Themes come from schema B.
    3. A newly added static theme in B whose data comes from schema C.
    The mapviewer datasource points to schema B where the static themes, data and metadata are defined while the dynamic themes have their own datasource specified as part of addJDBCTheme(...).
    To get the newly added map to work I've had to add a view in schema B that points to C, instead of referencing the table directly, and I've had to add the metadata twice, once for schema B and once for schema C.
    If I put the metadata in just one of the two schemas I get the following errors.
    08/11/21 13:58:57 ERROR [oracle.sdovis.ThemeTable] cannot find entry in ALL_SDO_GEOM_METADATA table for theme: AMBITOS_REST
    08/11/21 13:58:57 ERROR [oracle.sdovis.ThemeTable] java.sql.SQLException: Invalid column index
    OR
    08/11/21 13:53:39 ERROR [oracle.sdovis.theme.pgtp] java.sql.SQLException: ORA-29902: error in executing ODCIIndexStart() routine
    ORA-13203: failed to read USER_SDO_GEOM_METADATA view
    It's not a big deal, but I'd like to know if anyone else has had similar problems.
    Saludos,
    Lew.
    Edited by: Lew2 on Nov 21, 2008 6:42 AM

    Hi Lew,
    if you are using a recent version (10.1.3.1 or later) there is no need to use a view or to create the metadata in both schemas.
    You just need to grant SELECT on the tables between the schemas.
    You can try the following. Assume you have the MVDEMO schema (from MapViewer kit) and SCOTT schema.
    1) grant select on MVDEMO Counties table to SCOTT
    SQL> grant select on counties to scott;
    2) Now you are ready to create a predefined theme in schema SCOTT using the MVDEMO Counties table.
    - Open MapBuilder and load the SCOTT schema.
    - On the Data navigator (bottom left tree), go to Geometry tables and you should see the MVDEMO node and the COUNTIES node inside it.
    - Start a wizard to create a geometry theme based on this Counties table.
    - At the end you should see that the base table name is MVDEMO.COUNTIES. Therefore MapViewer will use the metadata in MVDEMO schema and there is no need to replicate it in SCOTT schema.
    Joao

  • How can I import tables from a different schema into the existing relational model, to add these tables to the existing model? Please help

    How can I import tables from a different schema into the existing relational model, to add these tables to the existing relational/logical model? Please help.
    Note: I already have the relational/logical model ready from one schema, and I need to add a few more tables to this relational/logical model.
    Can I import them the same way as I did previously?
    But even if I do, how can I add them to the model, given that the logical model has already been engineered?
    Please help.
    thanks

    Hi,
    Before you start, you should probably take a backup copy of your design (the .dmd file and associated folder), in case the update does not work out as you had hoped.
    You need to use Import > Data Dictionary again, to start the Data Dictionary Import Wizard.
    In step 1 use a suitable database connection that can access the relevant table definitions.
    In step 2 select the schema (or schemas) to import.  The "Import to" field in the lower left part of the main panel allows you to select which existing Relational Model to import into (or to specify that a new Relational Model is to be created).
    In step 3 select the tables to import.  (Note that if there are any Foreign Key constraints between the new tables and any tables you had previously imported, you should also include the previous tables, otherwise the Foreign Key constraints will not be imported.)
    After the import itself has completed, the "Compare Models" dialog is displayed.  This shows the differences between the model being imported and the previous state of the model, and allows you to select which changes are to be applied.
    Just selecting the Merge button should apply all the additions and changes in the new import.
    Having updated your Relational Model, you can then update your Logical Model.  To do this you repeat the "Engineer to Logical Model".  This displays the "Engineer to Logical Model" dialog, which shows the changes which will be applied to the Logical Model, and allows you to select which changes are to be applied.
    Just selecting the Engineer button should apply all the additions and changes.
    I hope this helps you achieve what you want.
    David

  • Is it possible to show data from two different sql tables?

    Is it possible to show data from two different SQL tables? Either showing combined data by using a join on a foreign key, or showing a typical master-detail view?
    I have one table with data about a house, and another table with URLs to the images in the blob. Could these two be combined in the same Gallery?
    Best regards Terje F - Norway

    Hi Terje,
    If you have a unique key, you could use one of the following functions for your scenarios:
    If you only have one image per house, you can use LookUp:
    http://siena.blob.core.windows.net/beta/ProjectSienaBetaFunctionReference.html#_Toc373745501
    If you have multiple images per house, you can use Filter:
    http://siena.blob.core.windows.net/beta/ProjectSienaBetaFunctionReference.html#_Toc373745487
    Thanks
    Robin

  • Selecting data from two different tables.

    Do we need to join the two tables on a primary/foreign key when using a SELECT statement to get data from those two tables? If not, how can I go about doing it?

    872959 wrote:
    If I am using the FROM clause to get data from two different tables, is it necessary that both tables have a column of identical data in them?
    In general, they ought to (or you need to join in a third table that tells you how to map rows from one table to rows of the other table).
    It is not strictly necessary that there be any join condition between tables. If you don't provide a join condition, Oracle has to do a Cartesian product. That means that if there are n rows in one table and m rows in the other, the result set will have n * m rows. It is very rarely a good idea to write queries that do Cartesian products but it does occasionally happen.
    Justin
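    To illustrate with the classic EMP/DEPT example tables (illustrative names, not the poster's own tables):

    -- With a join condition: rows are matched on the key
    SELECT e.ename, d.dname
    FROM   emp  e
    JOIN   dept d ON d.deptno = e.deptno;

    -- Without a join condition: a Cartesian product of n * m rows
    SELECT e.ename, d.dname
    FROM   emp e, dept d;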

  • Trying to use FTP to get data from a different server

    Hi Friends,
    I have to use FTP to get data from a different server and upload it to the SAP server. My problem is that when I try to do the FTP through the command line, it brings the file but with no data.
    Through the ABAP program nothing happens.
    Here's my code:
      V_PASSWORD = 'test@123'.
      V_PWD_LEN = STRLEN( V_PASSWORD ).
      CALL FUNCTION 'HTTP_SCRAMBLE'
        EXPORTING
          SOURCE      = V_PASSWORD
          SOURCELEN   = V_PWD_LEN
          KEY         = CS_KEY_500098
        IMPORTING
          DESTINATION = V_PASSWORD.
      CALL FUNCTION 'FTP_CONNECT'
        EXPORTING
          USER            = 'test'
          PASSWORD        = V_PASSWORD
          HOST            = '176.0.1.6'
          RFC_DESTINATION = 'SAPFTPA'
        IMPORTING
          HANDLE          = MI_HANDLE
        EXCEPTIONS
          NOT_CONNECTED   = 1
          OTHERS          = 2.
      CHECK SY-SUBRC = 0.
      cmd = 'lcd d:\ftp'.
      PERFORM FTP_COMMAND USING CMD.
      CMD = 'asc'.
      PERFORM FTP_COMMAND USING CMD.
      CONCATENATE 'dir' 'ftpt*' INTO CMD SEPARATED BY SPACE.
      PERFORM FTP_COMMAND USING CMD.
      cmd = 'ls'.
    concatenate 'ls' INTO CMD SEPARATED BY SPACE.
      PERFORM FTP_COMMAND USING CMD.
      cmd = 'mget trial.txt'.
    CONCATENATE 'mget' 'trial.txt' INTO CMD SEPARATED BY SPACE.
      CALL FUNCTION 'FTP_COMMAND'
        EXPORTING
          HANDLE        = MI_HANDLE
          COMMAND       = CMD
        TABLES
          DATA          = MTAB_DATA1
        EXCEPTIONS
          TCPIP_ERROR   = 1
          COMMAND_ERROR = 2
          DATA_ERROR    = 3
          OTHERS        = 4.
      IF SY-SUBRC = 0.
        LOOP AT MTAB_DATA1.
          WRITE: / MTAB_DATA1.
        ENDLOOP.
      ELSE.
        CONCATENATE 'Error in FTP Command while executing' CMD INTO ERROR SEPARATED BY SPACE.
        WRITE: / ERROR.
      ENDIF.

    Hi
    Try this... in one of my requirements, I did this successfully:
    FORM FTPCON.
    * FTP -------------------------------------------------------*
      CLEAR DSTLEN.
      SET EXTENDED CHECK OFF.
      DSTLEN = STRLEN( S_PWD ).   " S_PWD (password) is a selection-screen field
      CALL FUNCTION 'HTTP_SCRAMBLE'
        EXPORTING
          SOURCE      = S_PWD
          SOURCELEN   = DSTLEN
          KEY         = KEY
        IMPORTING
          DESTINATION = S_PWD.
      CALL FUNCTION 'FTP_CONNECT'
        EXPORTING
          USER            = P_USER           " username
          PASSWORD        = S_PWD            " password
          HOST            = P_HOST           " host
          RFC_DESTINATION = P_DEST           " RFC destination
        IMPORTING
          HANDLE          = HDL
        EXCEPTIONS
          NOT_CONNECTED   = 1
          OTHERS          = 2.
      IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
      CALL FUNCTION 'FTP_COMMAND'
        EXPORTING
          HANDLE        = HDL
          COMMAND       = 'set passive on'
        TABLES
          DATA          = RESULT
        EXCEPTIONS
          TCPIP_ERROR   = 1
          COMMAND_ERROR = 2
          DATA_ERROR    = 3.
      CALL FUNCTION 'FTP_R3_TO_SERVER'
        EXPORTING
          HANDLE         = HDL
          FNAME          = G_FCNAME
          CHARACTER_MODE = 'X'
        TABLES
          TEXT           = T_FILE1
        EXCEPTIONS
          TCPIP_ERROR    = 1
          COMMAND_ERROR  = 2
          DATA_ERROR     = 3
          OTHERS         = 4.
      IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
             WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
      CALL FUNCTION 'FTP_R3_TO_SERVER'
        EXPORTING
          HANDLE         = HDL
          FNAME          = G_FCNAME1
          CHARACTER_MODE = 'X'
        TABLES
          TEXT           = T_FILE2
        EXCEPTIONS
          TCPIP_ERROR    = 1
          COMMAND_ERROR  = 2
          DATA_ERROR     = 3
          OTHERS         = 4.
      IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
             WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
      CALL FUNCTION 'FTP_DISCONNECT'
        EXPORTING
          HANDLE = HDL.
      CALL FUNCTION 'RFC_CONNECTION_CLOSE'
          EXPORTING
            DESTINATION          = P_DEST
          EXCEPTIONS
            DESTINATION_NOT_OPEN = 1
            OTHERS               = 2.
        IF SY-SUBRC <> 0.
          MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                  WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
        ENDIF.
    ENDFORM.                    " FTPCON
    Hope it helps.....

  • Import some data from one oracle to another...

    Hi Guys,
    We have 2 Oracle database servers on 2 different machines, each with its own schema.
    I want to import some data from a specific table in one Oracle database into the other.
    What is the way to do that?
    Please help with details.
    Imran Baig

    Hi,
    Thanks for the reply.
    The tables are already created in both of the Oracle databases; only the data varies. I just have to import a few records from one Oracle database to the other, with the same user name and with the tables already existing.
    I have tried using a database link. I can view records from the other Oracle database, but as soon as I write an INSERT command, Oracle hangs. I can't do anything.
    Is there any other way?
    Imran
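    A minimal sketch of copying a few rows over a database link (the link, table, and predicate below are placeholders). If the session appears to hang on the INSERT, a common cause is an uncommitted transaction or a lock held on the target table, so check for blocking sessions and commit promptly:

    -- Pull only the required rows from the remote database
    INSERT INTO orders (order_id, customer_id, amount)
    SELECT order_id, customer_id, amount
    FROM   orders@remote_db          -- database link to the source instance
    WHERE  order_id BETWEEN 1000 AND 1050;

    COMMIT;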

  • Cross reference data from 2 different ecc system.

    Hi Sdners,
    I am working on a scenario where I have to get data from two different ECC systems, consolidate it, and send it back to the respective systems.
    But some reference data differs between the two systems, and when I merge data from the 2 systems I have to keep one of the reference values. The problem comes when I syndicate it back to ECC: it cannot accept new reference data.
    Please suggest how to proceed in such a case.
    It's urgent.
    Points will be rewarded for genuine answers.
    Thanks in advance,
    Regards,
    Neethu.

    Hi,
    First, enable the key mapping property (set it to Yes) for the table which you want to use for
    importing and syndicating.
    Create two remote systems of type inbound/outbound.
    Import the data from the first remote system and map the corresponding fields.
    Don't forget to map the remote key field on the destination side: make a clone
    of one of the display fields and map it to the remote key field.
    After importing, you can see which remote system the records were imported from
    using the Edit Key Mappings option in Data Manager. It shows the remote system
    name and the corresponding remote key.
    Do the same for the second remote system too.
    After merging the data in Data Manager, you can see the merged record and its
    two remote system names and two remote keys by using the Edit Key Mappings
    option, so the merged record goes back to both remote systems when you syndicate
    the records.
    Syndicate the data for the first remote system by selecting the destination properties and
    setting the output remote system property under the map properties tab to your first remote
    system.
    Do the mapping for the corresponding fields and don't forget to map the value field under
    the remote key. Then MDM generates remote keys only for records belonging to your
    first remote system. You can see this in the destination preview; it doesn't generate
    remote keys for the second remote system. Then check the option "Suppress records
    without key" under the map properties tab and execute the syndication. Finally you can
    see the accurate records.
    Do the same for the second remote system too.
    Hope it helps
    Cheers
    Narendra

  • Need a script to import the data from flat file

    Hi Friends,
    Does anyone have a script to import data from flat files into an Oracle database (Linux OS)? I have to automate the script to run every 30 minutes, check for any flat files in the incoming directory, and process them without user interaction.
    Thanks.
    Srini
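    One possible approach (a sketch only, with placeholder directory, file, table, and column names; this is not taken from the reply below) is an external table that reads the flat file, driven every 30 minutes by cron or DBMS_SCHEDULER:

    -- One-time setup: directory object pointing at the incoming folder
    CREATE OR REPLACE DIRECTORY incoming_dir AS '/data/incoming';

    -- External table describing the flat file layout
    CREATE TABLE stg_incoming_ext (
      cust_id   NUMBER,
      cust_name VARCHAR2(100),
      amount    NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY incoming_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('incoming.csv')
    );

    -- Scheduled step (cron / DBMS_SCHEDULER): load the file contents and commit
    INSERT INTO target_table (cust_id, cust_name, amount)
    SELECT cust_id, cust_name, amount
    FROM   stg_incoming_ext;
    COMMIT;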

    Here is my init.ora file
    # $Header: init.ora 06-aug-98.10:24:40 atsukerm Exp $
    # Copyright (c) 1991, 1997, 1998 by Oracle Corporation
    # NAME
    # init.ora
    # FUNCTION
    # NOTES
    # MODIFIED
    # atsukerm 08/06/98 - fix for 8.1.
    # hpiao 06/05/97 - fix for 803
    # glavash 05/12/97 - add oracle_trace_enable comment
    # hpiao 04/22/97 - remove ifile=, events=, etc.
    # alingelb 09/19/94 - remove vms-specific stuff
    # dpawson 07/07/93 - add more comments regarded archive start
    # maporter 10/29/92 - Add vms_sga_use_gblpagfile=TRUE
    # jloaiza 03/07/92 - change ALPHA to BETA
    # danderso 02/26/92 - change db_block_cache_protect to dbblock_cache_p
    # ghallmar 02/03/92 - db_directory -> db_domain
    # maporter 01/12/92 - merge changes from branch 1.8.308.1
    # maporter 12/21/91 - bug 76493: Add control_files parameter
    # wbridge 12/03/91 - use of %c in archive format is discouraged
    # ghallmar 12/02/91 - add global_names=true, db_directory=us.acme.com
    # thayes 11/27/91 - Change default for cache_clone
    # jloaiza 08/13/91 - merge changes from branch 1.7.100.1
    # jloaiza 07/31/91 - add debug stuff
    # rlim 04/29/91 - removal of char_is_varchar2
    # Bridge 03/12/91 - log_allocation no longer exists
    # Wijaya 02/05/91 - remove obsolete parameters
    # Example INIT.ORA file
    # This file is provided by Oracle Corporation to help you customize
    # your RDBMS installation for your site. Important system parameters
    # are discussed, and example settings given.
    # Some parameter settings are generic to any size installation.
    # For parameters that require different values in different size
    # installations, three scenarios have been provided: SMALL, MEDIUM
    # and LARGE. Any parameter that needs to be tuned according to
    # installation size will have three settings, each one commented
    # according to installation size.
    # Use the following table to approximate the SGA size needed for the
    # three scenarious provided in this file:
    #                 -------Installation/Database Size------
    #                  SMALL           MEDIUM          LARGE
    #  Block     2K    4500K           6800K           17000K
    #  Size      4K    5500K           8800K           21000K
    # To set up a database that multiple instances will be using, place
    # all instance-specific parameters in one file, and then have all
    # of these files point to a master file using the IFILE command.
    # This way, when you change a public
    # parameter, it will automatically change on all instances. This is
    # necessary, since all instances must run with the same value for many
    # parameters. For example, if you choose to use private rollback segments,
    # these must be specified in different files, but since all gc_*
    # parameters must be the same on all instances, they should be in one file.
    # INSTRUCTIONS: Edit this file and the other INIT files it calls for
    # your site, either by using the values provided here or by providing
    # your own. Then place an IFILE= line into each instance-specific
    # INIT file that points at this file.
    # NOTE: Parameter values suggested in this file are based on conservative
    # estimates for computer memory availability. You should adjust values upward
    # for modern machines.
    # You may also consider using Database Configuration Assistant tool (DBCA)
    # to create INIT file and to size your initial set of tablespaces based
    # on the user input.
    # replace DEFAULT with your database name
    db_name=DEFAULT
    db_files = 80 # SMALL
    # db_files = 400 # MEDIUM
    # db_files = 1500 # LARGE
    db_file_multiblock_read_count = 8 # SMALL
    # db_file_multiblock_read_count = 16 # MEDIUM
    # db_file_multiblock_read_count = 32 # LARGE
    db_block_buffers = 100 # SMALL
    # db_block_buffers = 550 # MEDIUM
    # db_block_buffers = 3200 # LARGE
    shared_pool_size = 3500000 # SMALL
    # shared_pool_size = 5000000 # MEDIUM
    # shared_pool_size = 9000000 # LARGE
    log_checkpoint_interval = 10000
    processes = 50 # SMALL
    # processes = 100 # MEDIUM
    # processes = 200 # LARGE
    parallel_max_servers = 5 # SMALL
    # parallel_max_servers = 4 x (number of CPUs) # MEDIUM
    # parallel_max_servers = 4 x (number of CPUs) # LARGE
    log_buffer = 32768 # SMALL
    # log_buffer = 32768 # MEDIUM
    # log_buffer = 163840 # LARGE
    # audit_trail = true # if you want auditing
    # timed_statistics = true # if you want timed statistics
    max_dump_file_size = 10240 # limit trace file size to 5 Meg each
    # Uncommenting the line below will cause automatic archiving if archiving has
    # been enabled using ALTER DATABASE ARCHIVELOG.
    # log_archive_start = true
    # log_archive_dest = disk$rdbms:[oracle.archive]
    # log_archive_format = "T%TS%S.ARC"
    # If using private rollback segments, place lines of the following
    # form in each of your instance-specific init.ora files:
    # rollback_segments = (name1, name2)
    # If using public rollback segments, define how many
    # rollback segments each instance will pick up, using the formula
    # # of rollback segments = transactions / transactions_per_rollback_segment
    # In this example each instance will grab 40/5 = 8:
    # transactions = 40
    # transactions_per_rollback_segment = 5
    # Global Naming -- enforce that a dblink has same name as the db it connects to
    global_names = TRUE
    # Edit and uncomment the following line to provide the suffix that will be
    # appended to the db_name parameter (separated with a dot) and stored as the
    # global database name when a database is created. If your site uses
    # Internet Domain names for e-mail, then the part of your e-mail address after
    # the '@' is a good candidate for this parameter value.
    # db_domain = us.acme.com      # global database name is db_name.db_domain
    # FOR DEVELOPMENT ONLY, ALWAYS TRY TO USE SYSTEM BACKING STORE
    # vms_sga_use_gblpagfil = TRUE
    # FOR BETA RELEASE ONLY. Enable debugging modes. Note that these can
    # adversely affect performance. On some non-VMS ports the db_block_cache_*
    # debugging modes have a severe effect on performance.
    #_db_block_cache_protect = true # memory protect buffers
    #event = "10210 trace name context forever, level 2" # data block checking
    #event = "10211 trace name context forever, level 2" # index block checking
    #event = "10235 trace name context forever, level 1" # memory heap checking
    #event = "10049 trace name context forever, level 2" # memory protect cursors
    # define parallel server (multi-instance) parameters
    #ifile = ora_system:initps.ora
    # define two control files by default
    control_files = (ora_control1, ora_control2)
    # Uncomment the following line if you wish to enable the Oracle Trace product
    # to trace server activity. This enables scheduling of server collections
    # from the Oracle Enterprise Manager Console.
    # Also, if the oracle_trace_collection_name parameter is non-null,
    # every session will write to the named collection, as well as enabling you
    # to schedule future collections from the console.
    # oracle_trace_enable = TRUE
    # Uncomment the following line, if you want to use some of the new 8.1
    # features. Please remember that using them may require some downgrade
    # actions if you later decide to move back to 8.0.
    #compatible = 8.1.0
    Thanks.
    Srini

  • APEX Application accessing data from two different databases

    Hi All,
    Currently, as we all know, an APEX application resides in a database and is connected to a schema of that database.
    I want the APEX application to run while accessing data from two different databases. Elaborating on my question:
    Currently, my APEX production application is connected to the XXXX schema of database DB1 (where APEX resides). Now I want to add some pages to this APEX application for REPORT purposes, but I want these report pages to get their data from a different schema, YYYY, in database DB2.
    Is it possible to configure this scenario?
    The reason for doing this is to avoid the resource utilization impact of report-related (ad hoc) queries on the production DB1 database.
    Thanks
    Nil

    1. If you do the join of two or more tables in DB1, then all data is pulled over to DB1 and the join is executed there: more data over the database link and more work for DB1. Better to keep the joins where the data resides and pull over exactly the data that you need.
    2. Don't know about your different block sizes. Seems a nice question for one of the other forums (DBA or SQL).
    3. I mean: create synonyms on DB1 for the report views in DB2.
    Hope all is clear!
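    A rough sketch of points 1 and 3, assuming a TNS alias DB2_TNS for the report database and a report view SALES_REPORT_V owned by YYYY (all placeholder names):

    -- On DB1 (where APEX runs): link to the report schema on DB2
    CREATE DATABASE LINK db2_reports
      CONNECT TO yyyy IDENTIFIED BY yyyy_pw    -- placeholder password
      USING 'DB2_TNS';

    -- Local synonym so the APEX report pages can query the remote view as if it were local
    CREATE SYNONYM sales_report_v FOR sales_report_v@db2_reports;

    -- A report region on the new pages can then simply select from it
    SELECT * FROM sales_report_v;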

  • Import xml data from a variable locations

    Hi,
    Could some please help me with a solution?
    Here's what currently runs on our server. An application runs and does the following:
    Create an unique folder
    Copy a fillable, reader extended form into this folder
    Put data in a XML-file (always same filename) and places that in the same folder
    Open the PDF for the user to fill in the details.
    Now, the change I want to make is to import the data from the XML file into the PDF. I used a button on the form that calls xfa.host.importData(). This brings up a browse-for-file box so the user can select the correct XML file. This works!
    However, as the folder from which the PDF opens is always a different one, the browse-for-file box opens in the wrong folder. The user does see an XML file, but it is from a different folder. So, what I want is the following:
    The form opens
    The PDF imports the data from the XML file automatically, without user intervention
    The user finalizes the form and saves it.
    In short: how can I create a variable from the folder location where the PDF is opened, and merge that with the XML filename?
    Hopefully I can then use xfa.host.importData("filename.xml")
    I hope someone can advise me! Thanks in advance!

    Hi,
    thanks for your response. It sure was helpful to have a look at that discussion. However, that article deals with a defined source with a fixed path (xfa.host.importData("/c/path/to/xml/file.xml");).
    That is exactly my problem: I need to determine the path first, since it changes every time, and then import that file using code.
    I hope my question is clear. If not, please let me know!
    Regards,
    Erik
    By the way, I also tried opening the same PDF from different folders. Adobe Reader seems to remember where it previously looked for an XML file. So, if there is a way to set the folder that the browse-for-file box opens in, that would be great too!
