Tracing remote queries

Hi
I have two databases, X and Y. A database link exists between X
and Y. I have views in X which refer to tables in Y. I have some
local tables in X as well.
I create a query which refers to some local (smaller) tables in
X and some remote (larger) tables in Y - I use DRIVING_SITE to
make sure the query runs on the Y database for improved
performance rather than dragging the large volumes over the
network link.
Unfortunately, the side effect is that when I set SQL_TRACE=TRUE
on X, the trace files generated only show the work done on X -
i.e. not very much - and I have no idea how much or what kind of
work is done on Y, other than the elapsed time of the whole query.
Anybody got any ideas on an approach to this?
I could take the query from X and run it on the Y database
(making sure it runs with the same kind of plan) and trace it
there, but I wondered if someone had a better idea.
It would be nice if you could temporarily turn tracing on for a
user on the Y database... that way, whenever the query executes
across the DB link as that user, a trace file is generated.
Thanks in advance.
Jeff

I found that you can use
sys.dbms_system.set_sql_trace_in_session(&sid, &serial, &status);
to deal with this.
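For the archives, a minimal sketch of the approach (run on Y; the
v$session filters and the sid/serial# values below are made-up
examples - adjust for your environment):

-- On Y: find the session created by the incoming DB link connection.
-- A link session normally logs in as the link's user from the X host.
SELECT sid, serial#, username, machine
  FROM v$session
 WHERE username = 'LINK_USER'    -- the user the DB link connects as (assumption)
   AND machine  = 'x-db-host';   -- the host running database X (assumption)

-- Turn tracing on for that session, run the query from X, then turn it off:
EXECUTE sys.dbms_system.set_sql_trace_in_session(123, 4567, TRUE);
EXECUTE sys.dbms_system.set_sql_trace_in_session(123, 4567, FALSE);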
Issue closed.

Similar Messages

  • Heavy remote queries run forever

    Hi everybody,
    I'm using SQL Developer and SQL Tool to execute remote queries on an Oracle database. I think I have some problem with my PC, because when I try to execute heavy queries on large tables the query starts but never ends. When I try to stop the query I get the following error message: "ORA-12152: TNS:unable to send break message" (Cause: Unable to send break message. Connection probably disconnected). When my associates try to perform the same queries, they manage to get results.
    Can anyone tell me if there's a problem with my PC (e.g. tcp/ip configuration)?
    I think I have the same problem when I try to access the BPEL Console: sometimes I don't manage to access a tab, and I think the problem is that the DB connection goes down (time-out problem).
    Thanks for your help!

    Hello,
    You might already know about the error text; I recommend you look on Metalink around this error. If your co-workers are able to execute and get results successfully, then your connection might be handled differently - get your network admin/DBA involved.
    Error: ORA-12152
    Text: TNS:unable to send break message
    Cause: Unable to send break message. Connection probably disconnected.
    Action: Re-establish connection. If the error is persistent, turn on tracing and re-execute the operation.
    Something you can try, in agreement with your DBA and network admin:
    Add SQLNET.EXPIRE_TIME to the database sqlnet.ora file. This is the file located in the $ORACLE_HOME/network/admin directory or in the directory specified by the TNS_ADMIN environment variable.
    The EXPIRE_TIME should be set to the highest value possible that is within the firewall idle connection timeout, to limit the amount of network traffic generated. For example, if the firewall timeout is 20 minutes, set:
    SQLNET.EXPIRE_TIME=15
    Regards

  • Spatial Tables and Remote Queries

    Hi Experts,
    Can anyone explain to me when and where to use Spatial Tables and Remote Queries, with some examples?
    Thanks in advance.

    Hi,
    there are some nice demos from Andrejus:
    http://andrejusb.blogspot.com/search/label/Spatial
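    For a flavor of what the demos cover, here is a minimal sketch (table and data are made up; SRID 8307 is WGS84 longitude/latitude, and SDO_WITHIN_DISTANCE also needs a USER_SDO_GEOM_METADATA entry plus a spatial index before it will run):
    -- A table with a point geometry column
    CREATE TABLE offices (
      id   NUMBER PRIMARY KEY,
      name VARCHAR2(50),
      geom SDO_GEOMETRY
    );
    INSERT INTO offices VALUES (
      1, 'HQ',
      SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-122.4, 37.8, NULL), NULL, NULL)
    );
    -- Offices within 10 km of a given point
    SELECT name
      FROM offices o
     WHERE SDO_WITHIN_DISTANCE(
             o.geom,
             SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-122.0, 37.5, NULL), NULL, NULL),
             'distance=10 unit=KM') = 'TRUE';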
    Hope it helps,
    Friedhold

  • Remote queries using Perl

    Hi,
    I'm new to this forum, so apologies if this is not the right place to post.
    I have a Perl script which needs to perform some simple queries on a remote ORACLE 10g DB. The script must run on a Linux machine.
    Could you please tell which ORACLE components I need to install on the Linux machine? Would SQL*Net be enough?
    Thanks
    Gianni

    It depends on how you want to connect to Oracle. There are basically 2 ways:
    1) Run sqlplus through Perl and collect the output, similar to how it is done in a shell script. This is a shell script (incomplete):
    sqlplus -s << EOF
    / as sysdba
    clear columns
    set linesize 10000
    set pause off
    set verify off
    set trimspool on
    set pages 0
    set feedback off
    select member from v\$logfile;
    exit
    EOF
    The Perl equivalent of the above shell script would be:
    #####
    # Declaration
    #####
    my $connect = qq("/ as sysdba");
    my $sql_extra1 = "clear columns
    set linesize 10000
    set pause off
    set verify off
    set trimspool on
    set pages 0
    set feedback off\n";
    #####
    # Create the SQL command file
    #####
    open (SQL, ">", "/usr/tmp/sqlplus_tmp.log") or die "cannot open file\n";
    print SQL $sql_extra1;
    print SQL "select member from v\$logfile;\n";
    print SQL "exit";
    close (SQL);
    #####
    # Run the command and capture the output. qx() is equivalent to backticks: ``
    #####
    my @sqlplus_output = qx(sqlplus -s $connect \@/usr/tmp/sqlplus_tmp.log);
    print "Output =====>\n";
    foreach (@sqlplus_output){
      chomp;
      print "$_\n";
    }
    In this case you will need sqlplus installed on the local or remote server. You can choose to run the script on the remote server, or run it locally but connect through listener to the remote database.
    2) Use DBD::Oracle - Oracle database driver for the DBI module. This is a CPAN module and for this you do not use sqlplus. The advantage with this method is that you have the result set already in a Perl variable. With previous sqlplus method you always have to parse the resulting set as the output is unformatted text.
    The Perl script would be:
    use DBI;
    my $dbh = DBI->connect("dbi:Oracle:$dbname", $user, $passwd);
    my $SEL = "select member from v\$logfile";  # no trailing ';' when going through DBI
    my $sth = $dbh->prepare($SEL);
    $sth->execute();
    my @data = $sth->fetchrow_array();
    For more information on Perl for DBAs, see my Perl presentation given at OpenWorld: http://www.dbvisit.com/docs/Perl_A_DBA_and_Developers_best_friend.pdf
    Kind regards, Arjen

  • How do I enable "TRACING" of queries and connects ?

    As far as I know, I can somewhere enable tracing of
    all incoming connection (= login session) attempts and queries.
    How can I turn this tracing on?
    Where can I see the tracing log?

    To enable trace for another session currently connected:
    EXECUTE SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(<sid>, <serial#>, TRUE );
    To disable trace for another session currently connected:
    EXECUTE SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(<sid>, <serial#>, FALSE );
    To enable trace for you own, current session:
    alter session set events '10046 trace name context forever, level 12';
    To disable trace for your own current session:
    alter session set events '10046 trace name context off';
    To see what your current session trace dump file is:
    declare
      spid1     VARCHAR2(9);
      tracefile VARCHAR2(80);
      sid_id    VARCHAR2(8);
      udump_loc VARCHAR2(60);
    begin
      SELECT spid
        INTO spid1
        FROM v$process a,
             v$session b,
             v$mystat  m
       WHERE a.addr = b.paddr
         AND b.audsid = userenv('SESSIONID')
         AND b.sid    = m.sid
         AND ROWNUM = 1 ;
      SELECT LOWER(instance_name)
        INTO sid_id
        FROM v$instance ;
      SELECT value
        INTO udump_loc
        FROM v$parameter
       WHERE name = 'user_dump_dest' ;
      tracefile:= udump_loc||'/'||sid_id||'_ora_' || spid1 ||'.trc';
      dbms_output.enable(1000000);
      dbms_output.put_line(tracefile);
    end;
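    On 11g and later you can skip the joins above, since the trace file name is exposed directly (a simpler alternative; v$diag_info does not exist in older releases):
    SELECT value
      FROM v$diag_info
     WHERE name = 'Default Trace File';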

  • Tracing SQL Queries from an Application.

    I read the info here:
    http://www.psoug.org/reference/traceanalyzer.html
    I have been able to use this guide to analyze traces.
    I installed the DBMS_SUPPORT package on my XE database. I need to know how I can run a trace from my front-end application and get it analyzed using Trace Analyzer.
    I have got trace files by using DBMS_SUPPORT, and I have been successful in running tkprof on them.
    I need to trap the values of the bind variables that are being sent by the front-end application.
    Any suggestions would be welcome.
    Thanks.

    End to End Application Tracing can identify the source of an excessive workload, such as a high load SQL statement, by client identifier, service, module, action, session, instance, or an entire database. This isolates the problem to a specific user, service, session, or application component.
    Oracle provides the trcsess command-line utility that consolidates tracing information based on specific criteria.
    The SQL Trace facility and TKPROF are two basic performance diagnostic tools that can help you monitor applications running against the Oracle Server.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/sqltrace.htm#i19083
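    Since you specifically need bind values: plain SQL_TRACE does not write them; the trace must run at 10046 level 4 (binds) or 12 (binds + waits). With DBMS_MONITOR (10g and later, so available on XE) that maps to the binds parameter. A sketch, assuming your front end tags its sessions with a client identifier (the name 'MYAPP' is made up):
    -- The application sets the identifier once per session:
    -- EXEC DBMS_SESSION.SET_IDENTIFIER('MYAPP');
    -- Trace every session carrying this identifier, binds included:
    EXEC DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE(client_id => 'MYAPP', waits => TRUE, binds => TRUE);
    -- ...reproduce the problem from the front end, then switch tracing off:
    EXEC DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE(client_id => 'MYAPP');
    -- Consolidate the resulting trace files before tkprof / Trace Analyzer:
    --   trcsess output=myapp.trc clientid=MYAPP *.trc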

  • Force Operation on Remote Database

    I have a 3 part UNION ALL query, each part of which runs against a different DB via DB links (the 1st part runs against the local DB).
    When I run each part that accesses the remote DB, the query runs totally remotely and runs in under 5 seconds.
    However, when I combine them into one query, I see REMOTE inside of a nested loop structure, and the query takes around 20 minutes. No part of the query depends on anything on the local machine; I've had that reviewed. (I also have about 20 other queries with the same structure that generate correct explain plans).
    I have verified that the subquery can and should run totally against the remote DB.
    Is there any way to force the query to be executed remotely?
    The portion of the explain plan looks like this:
    SORT GROUP BY
    ..nested loops
    ....nested loops
    ......remote
    ......remote
    ....remote
    ..remote

    Tried driving_site with the following results:
    1. Local query and 1 remote query with driving_site - works fine!
    2. Local query and 2 remote queries, each with driving_site - driving_site is ignored, back to nested loops.
    3. Local query, 1 remote query with driving_site and 1 remote query without driving_site - I get:
    ORA-22992: cannot use LOB locators selected from remote tables
    ORA-02063: preceding line from PROD
    ORA-02063: preceding 2 lines from PDSPROD_B
    Note that there is a CLOB in one of the tables, but I am not using that column in the query.
    The subqueries look like this:
    SELECT /*+ driving_site(c2) */ c2.company_name, COUNT (b.customer_id), c2.company_id,
    c2.super_domain_info_id, sd2.brand_full_name, &&cl_id_3 AS CLUSTER_ID
    FROM v_user_info&&cluster_link_3 b,
    company_info&&cluster_link_3 c2,
    user_audit_info&&cluster_link_3 u,
    super_domain_info&&cluster_link_3 sd2
    WHERE b.company_id = c2.company_id
    AND b.customer_id = u.customer_id
    AND c2.super_domain_info_id = sd2.super_domain_info_id
    AND b.account_type = 0
    AND b.active_in_company = 'Y'
    AND b.company_id NOT IN
    (5982737, 5982906, 5982712, 5982736, 5982908, 5977782, 11,
    2085412, 2134043, 5874338, 2, 5979364, 5979363, 6, 5980943,
    5981023, 12, 5978042, 5922312, 5979201, 5977961, 5982603,
    5978602, 5978001, 5981903, 5981423, 5978523, 5879286, 5978284,
    5977941, 5977661, 5977801, 5982185, 200, 5978524, 5982603,
    5979562, 5979963)
    AND u.first_login_time <> TO_DATE ('01/01/1900', 'MM/DD/YYYY')
    AND c2.current_membership_id IN (
    SELECT cm.ID
    FROM cluster_membership&&cluster_link_3 cm
    WHERE cm.CLUSTER_ID = &&cl_id_3)
    GROUP BY c2.company_name,
    c2.company_id,
    c2.super_domain_info_id,
    sd2.brand_full_name
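    One workaround that follows from result 1 above (each branch is fine when it is the only remote query in the statement) is to run each UNION ALL branch as a stand-alone SELECT and collect the rows into a local work table. DRIVING_SITE is reportedly ignored inside INSERT ... SELECT, so fetching through a PL/SQL cursor keeps the hint effective. A rough sketch with made-up names (results_tab, simplified columns):
    BEGIN
      FOR r IN (SELECT /*+ driving_site(c2) */
                       c2.company_id, COUNT(b.customer_id) AS cnt
                  FROM v_user_info@link_1 b, company_info@link_1 c2
                 WHERE b.company_id = c2.company_id
                 GROUP BY c2.company_id)
      LOOP
        INSERT INTO results_tab (company_id, cnt) VALUES (r.company_id, r.cnt);
      END LOOP;
      COMMIT;
    END;
    /
    Repeat the loop once per DB link, then run the final GROUP BY against results_tab locally.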

  • It's a remote question, sorry!

    Hi. Have had BT Vision since April last year, and some of the buttons down the left-hand side of the remote do not work. Cannot use volume and mode. The BT engineer said he didn't know why it's not working, he has seen it twice - excellent customer service, eh?
    Can anyone help? I have looked through loads of the forum remote queries and many are saying hold the volume + button and mute. Erm, I can't see a mute button on this remote (am I going blind?)
    If anyone can help I would really appreciate it!

    Are you trying to get the BTV remote to operate your telly? The instructions are here. The Mute button is the one to the left of Back (a loudspeaker symbol with an X).

  • JCO3: NullPointerException unregistering server with custom repository

    Hello,
    I have a JCO 2 project running on a J2EE server that we ported over to JCO3, and the ultimate issue is that when I
    stop the JCoServer in the application, it's throwing a null pointer exception.  Seemingly because the JCoServers
    are in a higher classloader, they are still there on restart, and then I get the NullPointerException trying to
    recreate the server as well.  I very well may be doing something totally wrong here...
    I'm creating the Data Provider, Repository, and Server as follows:
    ========================================================
    // This is a simple implementation of ServerDataProvider
    markviewDataProvider = new MarkviewServerDataProvider();
    if (!Environment.isServerDataProviderRegistered()) {
      log.debug("Registering data provider");
      Environment.registerServerDataProvider(markviewDataProvider);
    } else {
      log.error("Data provider already registered.");
      throw new IllegalStateException("Data provider already registered.  Please restart application server.");
    }
    // Create the necessary properties and assign them to the data provider.
    // Their values are being read from a database.
    serverProperties.setProperty(ServerDataProvider.JCO_GWSERV, rfcServerInfo.getGatewayServer());
    serverProperties.setProperty(ServerDataProvider.JCO_GWHOST, rfcServerInfo.getGatewayHost());
    serverProperties.setProperty(ServerDataProvider.JCO_PROGID, rfcServerInfo.getProgramId());
    serverProperties.setProperty(ServerDataProvider.JCO_CONNECTION_COUNT, maxConnections.toString());
    //XXX: We have to get this to work or else the server will not restart correctly
    //More on this later.
    //serverProperties.setProperty(ServerDataProvider.JCO_REP_DEST, "custom destination");
    markviewDataProvider.setServerProperties(serverName, serverProperties);
    // Get back the configured server from the factory.
    JCoServer server;
    try {
      server = JCoServerFactory.getServer(serverName);
    } catch (JCoException ex) {
      throw new RuntimeException("Unable to create the server " + serverName
        + ", because of " + ex.getMessage(), ex);
    }
    // rfcRepository is a singleton that contains
    // repository = JCo.createCustomRepository(name);
    // and methods to add structures to the repository
    server.setRepository(rfcRepository.exposeRepository());
    // add in the handlers, start the server
    ========================================================
    So...the first issue is this: if I set ServerDataProvider.JCO_REP_DEST to anything, the program won't run because
    it looks for the repository host connection info in a text file with that name, just like the example code does. 
    According to the javadoc, custom repositories don't have a destination, so it seems this property should be NULL. 
    Yet if I leave that property null, it works fine...until I need to stop the server (or, I get the same error when
    the above code runs again on restart, as the JCoServerFactory.getServer code runs in both places - tries to refresh
    a server that survived the restart as it failed to shut down due to the error).
    For completeness, the shutdown code essentially reads:
    =========================================================
    JCoServer server = JCoServerFactory.getServer(serverName);
    server.stop();
    =========================================================
    Then what happens is this:
    java.lang.NullPointerException
            at com.sap.conn.jco.rt.StandaloneServerFactory.compareServerCfg(StandaloneServerFactory.java:91)
            at com.sap.conn.jco.rt.StandaloneServerFactory.getServerInstance(StandaloneServerFactory.java:116)
            at com.sap.conn.jco.server.JCoServerFactory.getServer(JCoServerFactory.java:56)
            at com.markview.integrations.sap.gateway.sap2mv.JCoServerController.destroy(JCoServerController.java:310)
    I decompiled and debugged StandaloneServerFactory and I'm seeing this in compareServerCfg:
    =========================================================
            key = "jco.server.repository_destination";
            if(!current.getProperty(key).equals(updated.getProperty(key)))
                return CompareResult.CHANGED_RESTART;
            } else
                return CompareResult.CHANGED_UPDATE;
    =========================================================
    Clearly, if getProperty() returns null, the .equals is going to raise NullPointerException, so this appears to be
    the cause of that error.  In other words, if this code runs, I'm going to have this problem because the property
    needs to (seemingly) be null for my program to run at all.
    Any insights on how to fix this / work around this would be greatly appreciated!
    Thanks in advance,
    Corey

    <div style="width:58em;text-align:left">
    Whoa, a long posting and I'm not sure I read it thoroughly enough to fully understand your problem - so please be patient with me if any comments are silly...
    Your RFC server program provides an implementation for your own ServerDataProvider. If you reference a repository destination, the default DestinationDataProvider implementation will look for a file named after the repository destination with the extension .jcoDestination. If you don't like this behavior you simply need to provide your own implementation and register it, same as you did for the server data.
    According to the javadoc, custom repositories don't have a destination, so it seems this property should be NULL.
    Not sure where you found this. I'm assuming that you're talking about com.sap.conn.jco.JCoCustomRepository, which can of course be backed by a destination, i.e. see the comments in method setDestination:
    <div style="text-align:left">Set the destination for the remote queries.
    If the requested meta data objects are not available in the repository cache, i.e. not set by the application, the remote query to the specified destination will be started. Default value is null meaning the remote queries are not allowed.
    Note: As soon the destination is provided, the repository can contain mixed meta data - set statically by the application and requested dynamically from the ABAP backend.</div>
    I usually prefer using a connection to a backend providing the meta data instead of hard coding the meta data insertion into a custom repository. As the data is cached this introduces an acceptable overhead for retrieving meta data for long running server programs.
    Anyhow, when you create your custom repository via JCo.createCustomRepository(java.lang.String name) you obviously must give it a name. That's the name I'd expect in the server data as a reference for your repository (property ServerDataProvider.JCO_REP_DEST). However, if your custom repository is not backed by a complete online repository you obviously have to ensure that all meta data is already provided.
    So for testing it might be interesting to see if adding a JCoDestination to your custom repository helps. Another option might be to enable tracing and see what goes wrong...
    Cheers, harald

  • Oracle Sql Query issue Running on Different DB Version

    Hello All,
    I have come into a situation where we are running SQL queries on different DB versions of Oracle and have a performance issue. Let me tell you in brief; I'd really appreciate your prompt response as it's very imperative stuff.
    I have a query which runs on a 7.3.4 DB in around 30 mins, whereas the same query on 8i takes 15 sec - a huge difference. I have run the statistics to analyze on 7.3 and it is comparatively very high. The question here is: the query selects data from a table in the same schema, 2 tables from another DB over a DB link, and 2 other tables from a third DB over another DB link. So, how can we optimize this and get the same run time on 7.3 as on the 8i DB? Hope I am clear about my question. Eagerly waiting for your replies.
    Thanks in Advance.

    Difficult to be sure without any more detailed information, but I suspect that O7 is in effect copying the remote tables to local temp space, then joining; 8i is factoring out a better query to send to the remote DBs, which does as much work as possible on the remote DB before shipping the remaining rows back to the local one.
    You should be able to use EXPLAIN PLAN to identify what SQL is being shipped to the remote DB. If you can't (and it's been quite a while since I tried DB links or O7), then get the remote DBs to yourself and set SQL_TRACE on for the remote instances. Execute the query and then examine the remote trace files. And don't forget to turn off the tracing when you're done.
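    If EXPLAIN PLAN works, the SQL actually shipped to the remote site shows up in the OTHER column of PLAN_TABLE for operations tagged REMOTE. A sketch against made-up tables:
    EXPLAIN PLAN FOR
    SELECT l.id, r.val
      FROM local_tab l, remote_tab@remote_db r
     WHERE l.id = r.id;
    SELECT id, operation, other_tag, other
      FROM plan_table
     WHERE operation = 'REMOTE';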
    Of course it could just be that the CBO got better...
    HTH - if not, post your query and plans for the local DB, and the remote queries.
    Regards, Nigel

  • Performance problem with MERGE statement

    Version: 11.1.0.7.0
    I have an insert statement like the following, which takes less than 2 secs to complete and inserts around 4000 rows:
    INSERT INTO sch.tab1
              (c1,c2,c3)
    SELECT c1,c2,c3
       FROM sch1.tab1@dblink
      WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink);
    I wanted to change it to a MERGE statement just to avoid duplicate data. I changed it to the following:
    MERGE INTO sch.tab1 t1
    USING (SELECT c1,c2,c3
       FROM sch1.tab1@dblink
      WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink)) t2
    ON (t1.c1 = t2.c1)
    WHEN NOT MATCHED THEN
    INSERT (t1.c1,t1.c2,t1.c3)
    VALUES (t2.c1,t2.c2,t2.c3);
    The MERGE statement takes more than 2 mins (I stopped the execution after that). If I remove the WHERE clause subquery inside the subquery of the USING section, it executes in 1 sec.
    If I execute the same select statement, with the WHERE clause, outside the MERGE statement, it takes just 1 sec to return the data.
    Is there any known issue with the MERGE statement in the above scenario?

    riedelme wrote:
    Are your join columns indexed?
    Yes, the join columns are indexed.
    You are doing a remote query inside the merge; remote queries can slow things down. Do you have to select all the rows from the remote table? What if you copied them locally using a materialized view?
    Yes, I agree that remote queries will slow things down. But the same is not happening with select, insert and PL/SQL; it happens only when we use MERGE. I have to test what happens if we use a subquery referring to a local table or materialized view. Even if that works, I think there is still a problem with MERGE in the case of remote subqueries (at least until I test local queries). I wish someone could test similar scenarios so that we can know whether it is a genuine problem or something specific on my side.
    BTW, I haven't had great luck with MERGE either :(. Last time I tried to use it I found it faster to use a loop with insert/update logic.
    :) I used the same to overcome this situation. I think MERGE still needs to be improved functionally on Oracle's side, though I personally feel it is one of the more robust features to grace SQL and PL/SQL.
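    For the record, a sketch of the local-copy idea raised above (done here with a made-up global temporary table tab1_stage rather than a materialized view; column types are assumptions): pull the remote rows across once with the INSERT ... SELECT that is already known to be fast, then let MERGE work purely locally.
    CREATE GLOBAL TEMPORARY TABLE tab1_stage (
      c1 NUMBER, c2 NUMBER, c3 NUMBER
    ) ON COMMIT DELETE ROWS;
    INSERT INTO tab1_stage (c1, c2, c3)
    SELECT c1, c2, c3
      FROM sch1.tab1@dblink
     WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink);
    MERGE INTO sch.tab1 t1
    USING tab1_stage t2
       ON (t1.c1 = t2.c1)
     WHEN NOT MATCHED THEN
       INSERT (t1.c1, t1.c2, t1.c3)
       VALUES (t2.c1, t2.c2, t2.c3);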

  • Current Security Context Not Trusted When Using Linked Server From ABAP

    Hello,
    I am experiencing a head-scratcher of a problem when trying to use a Linked Server connection to query a remote SQL Server database from our R/3 system.  We have had this working just fine for some time, but after migrating to new hardware and upgrading OS, DBMS, and R/3, now we are running into problems.
    The target database is a named instance on SQL Server 2000 SP3, Windows 2000 Server.  The original source R/3 system was 4.7x2.00, also on SQL Server 2000 (SP4), Windows 2000 Server.  I had been using a Linked Server defined via SQL Enterprise Manager (actually defined when the source was on SQL Server 7), which called an alias defined with the Client Network Utility that pointed to the remote named instance.  This alias and Linked Server worked great for several years.
    Now we have migrated our R/3 system onto new hardware, running Windows Server 2003 SP1 and SQL Server 2005 SP1.  The application itself has been upgraded to ECC 6.0.  I performed the migration with a homogeneous system copy, and everything has worked just fine.  I redefined the Linked Server on the new SQL 2005 installation, this time avoiding the alias and referencing the remote named instance directly, and it tests out just fine using queries from SQL Management Studio.  It also tests fine with OSQL called from the R/3 server console, both when logged on as SAPServiceSID with a trusted connection, and with a SQL login as the schema owner (i.e., 'sid' in lowercase).  From outside of R/3, I cannot make it fail.  It works perfectly.
    That all changes when I try to use the Linked Server within an ABAP application, however.  The basic code in use is
    EXEC SQL.
       SET XACT_ABORT ON
       DELETE FROM [SERVER\INSTANCE].DATABASE.dbo.TABLE
    ENDEXEC.
    The only thing different about this code from that before the upgrade/migration is the reference to [SERVER\INSTANCE] which previously used the alias of just SERVER.
    The program short dumps with runtime error DBIF_DSQL2_SQL_ERROR, exception CX_SY_NATIVE_SQL_ERROR.  The database error code is 15274, and the error text is "Access to the remote server is denied because the current security context is not trusted."
    I have set the "trustworthy" property on the R/3 database, I have ensured SAPServiceSID is a member of the sysadmin SQL role, I've even made it a member of the local Administrators group on both source and target servers, and I've done the same with the SQL Server service account (it uses a domain account).  I have configured the Distributed Transaction Coordinator on the source (Win2003) system per Microsoft KB 839279 (this fixed problems with remote queries coming the other way from the SQL2000 system), and I've upgraded the system stored procedures on the target (SQL2000) system according to MS KB 906954.  I also tried making the schema user a member of the sysadmin role, but naturally that was disastrous, resulting in an instant R/3 crash (don't try this in production!), so I set it back the way it was (default).
    What's really strange is no matter how I try this from outside the R/3 system, it works perfectly, but from within R/3 it does not.  A search of SAP Notes, SDN forums, SAPFANS, Microsoft's KnowledgeBase, and MSDN Forums has not yielded quite the same problem (although that did lead me to learning about the "trustworthy" database property).
    Any insight someone could offer on this thorny problem would be most appreciated.
    Best regards,
    Matt

    Good news! We have got it to work. However, we did it in something of
    a backwards way, and I'm sure you'll laugh when you see how it was done. Also, the solution depends upon the fact that the remote server is still using SQL Server 2000, and so doesn't have quite so many restrictions placed upon it for distributed transactions and Linked Servers as SQL Server 2005 now does.
    At the heart of the solution is the fact that the Linked Server coming FROM the remote server TO our SAP system works fine. Finally, coupled with the knowledge that using DBCON on the SAP side to the remote server also does actually provide a connection (see Notes 323151 and 738371), we set up a roundabout way of achieving our goal. In essence, from ABAP, we set up the DBCON connection to the remote server, at which point all the Native SQL commands execute in the context of the remote server. From within that connection, we
    reference the tables in SAP via the Linked Server defined on the remote
    server, as if SAP were the remote server, selecting data from SAP and inserting it into the remote (but apparently local to this connection) tables.
    So, to spell it out, we define a Linked Server on the remote server pointing back to the SAP server as SAPSERV, with a SQL login mapping defined on the remote system pointing back to a SQL login in the SAP database. We also define a connection to the remote server from SAP using DBCON, using that remote SQL login for authentication.
    Then, in our ABAP code, we simply do something along the lines of
    exec sql.
   connect to 'REMOTE'
    endexec.
    exec sql.
   set connection 'REMOTE'
    endexec.
    exec sql.
       insert into REMOTE_TABLE
          select * from SAPSERV.SID.sid.SAP_TABLE
    endexec.
    exec sql.
       commit
    endexec.
    exec sql.
       disconnect 'REMOTE'
    endexec.
    This is, of course, a test program, but it demonstrated that it worked,
    and we were able to see that entries were appropriately deleted and inserted in the remote server's table. The actual program for use is a little more complex, in that there are about four different operations at different times, and we had to resolve the fact that the temp table SAP_TABLE was being held in a lock by our program, resulting in a deadly embrace, but our developer was able to work that out, and all is now well.
    I don't know if this solution will have applicability to any other customers, but it works for us, for now.
    SAPSERV, REMOTE, REMOTE_TABLE, and SAP_TABLE are, of course, placeholder names, not the actual server or table names, so as not to confuse anyone.
    Best regards,
    Matt

  • SESSION_PER_USER limit gets exceeded

    I have an application which queries from the bug DB using a DB link. There are items on the page which are entered/selected, and these go as input into a query associated with the region. When I submit, the query gets executed and records are displayed. When I do this selection/execution a few times, a "SESSION_PER_USER limit exceeded" message is displayed.
    I think every time a query is executed, a database session gets created, and this is what exceeds the limit. I have to wait for some time, presumably until a background process clears the connections. I tried logging out and closing the browser; it did not help much.
    I am not sure of the solution!
    Is there a way, every time I execute a query, to close the session (including the database connection in the background) and then start a new session? Experts please help out!
    Regards
    ravi

    For as long as I can remember, the OPEN_LINKS default limit is 4. Closing and then re-opening DB links is horribly inefficient. Re-querying the underlying tables every time you page through a resultset can also be very inefficient (especially when your underlying data isn't changing), particularly over DB links.
    What I've done to solve my similar issues is:
    1) Increase open DB links so you can have more without having to close them all the time:
    -- DBA priv required
    alter system set open_links=20 scope=spfile;
    If you really do need to close a link explicitly:
    execute immediate 'alter session close database link <linkname>';
    -- or
    DBMS_SESSION.close_database_link('<linkname>');
    2) Use collections to allow the same SQL to be used with different DB links.
    -- to avoid leaving lots of cached data lying around, you probably want to consider deleting collections when you navigate away from the page that uses them.
    -- Consider page 0 as a good place for the cleanup.
    Re: Dynamic database link with IR
    3) If your remote queries are slow, try including the DRIVING_SITE hint
    (in my Apex experience, even if all tables are remote - maybe due to the way Apex variable substitution / bind variables are handled?)
    Re: Page load is timing out

  • Forms and Portal

    When a remote DB is used in Forms, an error pops up saying that remote queries could not be executed.
    Is it possible to build forms with Portal for insert, update and delete operations?

    Yes. You must create the following in the Public Schema under Database Objects (a plain-SQL sketch of both follows):
    A database link to the remote database.
    Synonyms for the tables you will be using.
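    For illustration (remote_db, the TNS alias REMOTE_TNS, the credentials and the emp table are all placeholder names):
    CREATE DATABASE LINK remote_db
      CONNECT TO scott IDENTIFIED BY tiger
      USING 'REMOTE_TNS';
    CREATE PUBLIC SYNONYM emp FOR emp@remote_db;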
    Now under Providers, create a Portal DB Provider. I use the same name as the database link created above.
    Note: (I have not been able to create forms to the remote database using any other names.)
    Now in your Portal DB Provider you can Create New Forms --> Form based on table or view. In the Table or View page, type in the Public Synonym you created above. This Synonym probably will not show in the list.
    This should get you started.
    Good Luck

  • The '' secondary data source is not a relational data source, or does not use an OLE DB provider.

    Hi,
    We are using Teradata as the data source in SSAS 2008 R2, using the '.Net Providers\.Net Data Provider for Teradata'.
    In the DSV we have two data sources, both using Teradata. While processing dimensions, all dimensions coming from the secondary data source get the error message:
    "The '' secondary data source is not a relational data source, or does not use an OLE DB provider."
    But we are not using an OLE DB provider. We've seen threads suggesting that named queries can be used instead for the secondary data source, but since we have a lot of tables, it's hard to implement using named queries.
    Any inputs would be appreciated.

    Hi memostone,
    When defining a data source view that contains tables, views, or columns from multiple data sources, the first data source from which you add objects to the data source view is designated as the primary data source (you cannot change the primary data source after it is defined). After defining a data source view based on objects from a single data source, you can then add objects from other data sources.
    If an OLAP processing or a data mining query requires data from multiple data sources in a single query, the primary data source must support remote queries using OpenRowset. Typically, this will be a SQL Server data source.
    Here are the restrictions:
    The primary data source should be SQL Server and support OpenRowset to the secondary data source.
    Or you design the cube in such a way that processing for each object only needs to access data from a single data source.
    I recommend you refer to the following article:
    Defining a Data Source View (Analysis Services):
    http://technet.microsoft.com/en-us/library/ms174600.aspx
    Regards,
    Bin Long
    TechNet Community Support
