Performance problem due to anonymous blocks

Hi,
One of the users on our database has created a procedure consisting of many blocks like the one given below:
begin
   select func1(var1, var2, var3)
     into vcompvalue
     from dual;
   if vcompvalue < 0 then
      vcompvalue := 0;
   end if;
exception
   when no_data_found then
      vcompvalue := 0;
end;
The procedure takes a long time to execute. Instead of writing separate blocks, will using SQL%NOTFOUND in place of the exception handler and merging the blocks with the rest of the code improve performance?
Thanks for the help!
Vinayak Thatte

I would guess it might; you'd be cutting down the number of PL/SQL clauses that have to be parsed, etc., so if you have enough anonymous blocks you may see a difference.
On a more specific note, can I just ask why you're checking for NO_DATA_FOUND? If DUAL ever throws this exception you've got serious problems with your database. If it's being thrown by your FUNC1 you might be better off (from a performance point of view) handling that exception within the function.
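For what it's worth, here's a minimal sketch of the direction I mean (untested, using the names from your post): since FUNC1 is PL/SQL anyway, you can drop the SELECT ... FROM DUAL round trip entirely and assign directly, which removes the SQL engine context switch; the NO_DATA_FOUND handler then only matters if FUNC1 itself raises it.
begin
   -- direct assignment: no SQL round trip through DUAL
   vcompvalue := func1(var1, var2, var3);
   if vcompvalue < 0 then
      vcompvalue := 0;
   end if;
exception
   when no_data_found then
      -- only reachable if func1 itself raises NO_DATA_FOUND
      vcompvalue := 0;
end;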
rgds, APC

Similar Messages

  • Performance problems due to sequential read on tables WBCROSSGT and CROSS

    Hello all,
    got the SAPNW2004s Sneak Preview ABAP installed. Performance is quite OK, but with certain dictionary operations like creating new attributes for a class I experience exceptionally long runtimes and timeout dumps. In SM50 I see a sequential read on table WBCROSSGT. In OSS I can't find anything applicable yet for this release (SAP_BASIS 700, support level 5).
    Any suggestions appreciated.
    Simon

    Hello,
    I had exactly the same problem after upgrading from MS SQL 2005 to MS SQL 2008 R2.
    Our DEV system was almost completely exhausted and normal operation wasn't possible anymore.
    SAP Note 1479008 solved the issue, even though it is only "released" for MaxDB.
    Cheers, Christoph

  • Performance problem due to garbage collection?

    Hallo,
    we implemented a digital whiteboard with JavaFX: there is a scene (3200 x 2400 px) on which you can draw (insert paths) and add post-its (colored rectangles with paths or strings as content). It's possible to drag the post-its around and it's also possible to pan the scene (to reach all parts of the scene which don't fit into the window).
    The application works quite fast even with ca. 50 post-its and lots of scribbles/drawings. But after ca. 20 minutes the application gets a lot slower. Drawing curves (or handwriting) is not really possible anymore, as the paths get angular, and moving a post-it takes longer and longer.
    How is this possible? Is there a problem with the JavaFX garbage collection? Does anyone else have the problem that the application gets slower after a while?
    When I close the window and load it again with the same elements it's as quick as in the beginning and slows down after ca. 20 min. again.
    Thanks for your help and suggestions what could be the problem!
    Raja

    I can't really answer...
    We can eliminate saturation of scenegraph capabilities since you can reload it again without problem (at start).
    Maybe you create a lot of temp objects that aren't properly collected?
    I suggest running JVisualVM on your application and watching its memory (and CPU) usage.

  • Performance problem on wait event PX Deq: Execute Reply

    Hi everybody
    I'm encountering a performance problem. I ran tkprof on a select statement and saw that more than 95% of the elapsed time is due to the event PX Deq: Execute Reply.
    This request is not CPU- or paging-intensive. What is this event, and how could I reduce it? Could it be a disk problem?
    Thanks a lot, best regards
    Greg
    Here is a sample of my tkprof:
    call     count    cpu  elapsed  disk  query  current  rows
    -------  -----  -----  -------  ----  -----  -------  ----
    Parse        1   0.03     0.03     0      0        0     0
    Execute      1   0.22     2.16    68    177       12     0
    Fetch        2   0.17   511.97    38     40        0     1
    -------  -----  -----  -------  ----  -----  -------  ----
    total        4   0.42   514.16   106    217       12     1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 38
    Rows  Row Source Operation
       1  PX COORDINATOR (cr=202 pr=103 pw=0 time=513984636 us)
       0   PX SEND QC (RANDOM) :TQ10003 (cr=0 pr=0 pw=0 time=0 us)
       0    HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us)
       0     PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
       0      PX SEND HASH :TQ10002 (cr=0 pr=0 pw=0 time=0 us)
       0       HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us)
       0        HASH JOIN (cr=0 pr=0 pw=0 time=0 us)
       0         BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
       0          PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
       0           PX SEND BROADCAST :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
     473            TABLE ACCESS FULL DIM_CALL_DISTANCE (cr=8 pr=7 pw=0 time=27259 us)
       0         HASH JOIN (cr=0 pr=0 pw=0 time=0 us)
       0          BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
       0           PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
       0            PX SEND BROADCAST :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
       4             TABLE ACCESS FULL DIM_AUDIT_CALL (cr=32 pr=31 pw=0 time=35037 us)
       0          PX BLOCK ITERATOR PARTITION: 1 16 (cr=0 pr=0 pw=0 time=0 us)
       0           TABLE ACCESS FULL FACT_CALL PARTITION: 1 48 (cr=0 pr=0 pw=0 time=0 us)
    Elapsed times include waiting on following events:
    Event waited on                            Times   Max. Wait  Total Waited
    ----------------------------------------  Waited  ----------  ------------
    db file sequential read                        67        0.05          0.95
    os thread startup                               4        0.21          0.80
    PX Deq: Join ACK                                4        0.00          0.00
    PX Deq: Parse Reply                             3        0.13          0.17
    SQL*Net message to client                       2        0.00          0.00
    PX Deq: Execute Reply                         304        1.96        511.68
    db file scattered read                          6        0.01          0.03
    PX qref latch                                  12        0.00          0.00
    SQL*Net message from client                     2       94.93         94.94
    PX Deq: Signal ACK                              6        0.10          0.11
    enq: PS - contention                            1        0.00          0.00
    ********************************************************************************

    PX Deq: Execute Reply is an idle event associated with Parallel Query. Are your tables partitioned, or do they have a degree greater than 1?
    The tables appear to be small in size. The overhead associated with parallel query generally hinders response time on queries involving small tables.
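    A quick way to check is the DEGREE setting on the objects - a sketch only, assuming you can query the DBA views (table names taken from the plan above):
    SELECT table_name, degree
    FROM   dba_tables
    WHERE  table_name IN ('DIM_CALL_DISTANCE', 'DIM_AUDIT_CALL', 'FACT_CALL');
    -- if parallelism isn't wanted on the small dimension tables:
    ALTER TABLE dim_call_distance NOPARALLEL;
    ALTER TABLE dim_audit_call NOPARALLEL;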

  • Performance problem while CPU is 80% idle?

    Hi,
    My end users are complaining about performance problems during execution of a batch process.
    As you can see, there are 1,745 statements executing each second.
    The AWR report shows 98.1% of the time spent waiting on CPU.
    The AWR report also shows that the host CPU is 79.9% idle.
    The second wait event shows only 212 seconds of waits on db file sequential read.
    Yet 4 minutes in a 1-hour period seems like a non-issue.
    Please advise
    DB Name         DB Id    Instance     Inst Num Startup Time    Release     RAC
    QERP          xxx        erp                 1 21-Jan-13 15:40 11.2.0.2.0   NO
    Host Name        Platform                         CPUs Cores Sockets Memory(GB)
    erptst           HP-UX IA (64-bit)                  16    16       4     127.83
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:     40066 22-Jan-13 20:00:52       207       9.6
      End Snap:     40067 22-Jan-13 21:00:05       210       9.6
       Elapsed:               59.21 (mins)
       DB Time:              189.24 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     8,800M     8,800M  Std Block Size:         8K
               Shared Pool Size:     1,056M     1,056M      Log Buffer:    49,344K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                3.2                0.1        0.00       0.05
           DB CPU(s):                3.1                0.1        0.00       0.05
           Redo size:          604,285.1           27,271.3
       Logical reads:          364,792.3           16,463.0
       Block changes:            3,629.5              163.8
      Physical reads:               21.5                1.0
     Physical writes:               95.3                4.3
          User calls:               68.7                3.1
              Parses:              212.9                9.6
         Hard parses:                0.3                0.0
    W/A MB processed:                1.2                0.1
              Logons:                0.3                0.0
            Executes:            1,745.2               78.8
           Rollbacks:                1.2                0.1
        Transactions:               22.2
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                Buffer  Hit   %:   99.99    In-memory Sort %:  100.00
                Library Hit   %:   99.95        Soft Parse %:   99.85
             Execute to Parse %:   87.80         Latch Hit %:   99.99
    Parse CPU to Parse Elapsd %:   74.76     % Non-Parse CPU:   99.89
    Shared Pool Statistics        Begin    End
                 Memory Usage %:   75.37   76.85
        % SQL with executions>1:   95.31   85.98
      % Memory for SQL w/exec>1:   90.33   82.84
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    DB CPU                                           11,144          98.1
    db file sequential read              52,714         214      4    1.9 User I/O
    SQL*Net break/reset to client        29,050           6      0     .1 Applicatio
    log file sync                         2,536           6      2     .0 Commit
    buffer busy waits                     4,338           2      1     .0 Concurrenc
    Host CPU (CPUs:   16 Cores:   16 Sockets:    4)
    ~~~~~~~~         Load Average
                   Begin       End     %User   %System      %WIO     %Idle
                    0.34      0.33      19.7       0.4       1.8      79.9

    Nikolay Savvinov wrote:
    > if the users are complaining about performance of the batch process, then that's what you should be looking at, not the entire system.
    I find it strange to see "end users" and "the batch process" in the same sentence (as it was in the first post). "End users" gives me the feeling of a significant number of concurrent sessions with people waiting for results in real time at the far end, while "batch process" carries the image of a small number of large-scale processes running overnight to prepare the data for the following morning.
    I mention this because my first view of the AWR output was: you've got 16 CPUs, only three in use, virtually no users, and doing very little work - how can the users complain? (One answer, of course, is that the 13 CPUs could be locked out of use as far as Oracle is concerned.) On the second read I decided that the "users" had gone home, and the complaint was simply that the batch process wasn't completing in time.
    In this case I think "the entire system" IS "the batch process"
    Determine which stored procedures and/or SQL statements took longer than usual and then find out why. Most likely you'll be able to find
    everything you need in AWR views (DBA_HIST_SQL%) and ASH archive (DBA_HIST_ACTIVE_SESS_HISTORY).
    If the batch process has changed dramatically and recently, then a simple first step might be to look at the current AWR report, find the few most time-consuming SQL statements, and use the awrsqrpt.sql script to find their history of execution plans.
    But I'd also just look at the expensive SQL - bearing in mind, particularly, that there are very few user calls per second yet many hundreds of executions per second: it strikes me that there could be quite a lot of PL/SQL going on, doing something a little bit expensive many times, or some PL/SQL function calling some SQL that used to be called rarely from a SQL statement but is now (due, perhaps, to a change in plan) being called much more frequently - so check SQL Ordered by Executions.
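    For example, a minimal sketch of the kind of AWR history query I mean (snap IDs taken from the report above; the top-10 cutoff is arbitrary):
    SELECT *
    FROM  (SELECT   sql_id,
                    SUM(executions_delta)                  executions,
                    ROUND(SUM(elapsed_time_delta)/1e6, 1)  elapsed_secs
           FROM     dba_hist_sqlstat
           WHERE    snap_id = 40067   -- deltas cover snap 40066 -> 40067
           GROUP BY sql_id
           ORDER BY SUM(elapsed_time_delta) DESC)
    WHERE rownum <= 10;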
    Regards
    Jonathan Lewis

  • How to execute an anonymous block

    hi all,
    I want to execute an anonymous PL/SQL block using dbms_sql.
    Is it possible through dbms_sql? Suppose I have 100 lines of code in an anonymous block - will I be able to execute it through dbms_sql, or please tell me another way to do it.
    Note: I will have this anonymous block code in a table. I should take the block from the table and execute it.
    Please help me with how to do it.
    thanks
    hari

    > we have a table driven approach for our project.
    That does not make sense, as a table-driven approach implies a data-driven approach.
    Which means that the code is static/fixed and processes the data.
    It is very complex to dynamically generate code to perform actions, instead of generating data to tell the code what actions to perform.
    This is usually only done in the domains of artificial intelligence and expert systems - and one can debate just how effective that approach is...
    Which raises the question of why you would choose such an approach in the first place.
    Do you also realise that this dynamic code will likely trash the SQL Shared Pool due to a lack of bind variables? And that trashing the Shared Pool that way is the #1 reason for poorly performing applications using Oracle?
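    That said, if you are stuck with blocks stored in a table, a minimal sketch (the table and column names here are made up, since you didn't post them) would use native dynamic SQL rather than dbms_sql:
    DECLARE
       l_block VARCHAR2(32767);   -- on 11g a CLOB works for bigger blocks
    BEGIN
       SELECT plsql_text
       INTO   l_block
       FROM   code_blocks          -- hypothetical table holding the block text
       WHERE  block_id = 1;
       EXECUTE IMMEDIATE l_block;  -- DBMS_SQL.PARSE/EXECUTE would also work
    END;
    /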

  • Interactive report performance problem over database link - Oracle Gateway

    Hello all;
    This is regarding a thread Interactive report performance problem over database link that was posted by Samo.
    The issue that I am facing is that when I use an Oracle function like apex_item.checkbox, the query slows down by 45 seconds.
    The query is like this (due to sensitivity issues, I cannot disclose real table names):
    SELECT apex_item.checkbox(1,b.col3)
    , a.col1
    , a.col2
    FROM table_one a
    , table_two b
    WHERE a.col3 = 12345
    AND a.col4 = 100
    AND b.col5 = a.col5
    table_one and table_two are remote tables (non-oracle) which are connected using Oracle Gateway.
    Now if I run the above query without the apex_item.checkbox function, the response is less than a second, but with apex_item.checkbox the query runs for more than 30 seconds. I have resolved the issue by creating a collection, but it's not good practice.
    I would like to get ideas from people on how to resolve or speed up the query.
    Any idea how to use sub-factoring for the above scenario? Or other methods (creating a view or materialized view is not an option)?
    Thank you.
    Shaun S.

    Hi Shaun
    Okay, I have a million questions (could you tell me if both tables are from the same remote source? It looks like they're possibly not), but let's just try some things first.
    By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This is to do with using the WITH blah AS (SELECT... syntax. Now in most circumstances this 'materialises' the results of the inner select statement. This means that we 'get' the results then do something with them afterwards. It's a handy trick when dealing with remote sites as sometimes you want the remote database to do the work. The reason that I ask you to use the MATERIALIZE hint for testing is just to force this, in 99.99% of cases this can be removed later. Using the WITH statement is also handled differently to inline view like SELECT * FROM (SELECT... but the same result can be mimicked with a NO_MERGE hint.
    Looking at your case I would be interested to see what the explain plan and results would be for something like the following two statements (sorry - you're going have to check them, it's late!)
    WITH a AS
     (SELECT /*+ MATERIALIZE */ *
      FROM table_one),
    b AS
     (SELECT /*+ MATERIALIZE */ *
      FROM table_two),
    sourceqry AS
     (SELECT  b.col3 x
            , a.col1 y
            , a.col2 z
      FROM  a
          , b
      WHERE a.col3 = 12345
      AND   a.col4 = 100
      AND   b.col5 = a.col5)
    SELECT apex_item.checkbox(1,x), y, z
    FROM sourceqry;

    WITH a AS
     (SELECT /*+ MATERIALIZE */ *
      FROM table_one),
    b AS
     (SELECT /*+ MATERIALIZE */ *
      FROM table_two)
    SELECT apex_item.checkbox(1,b.col3), a.col1, a.col2
    FROM  a
        , b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
    AND   b.col5 = a.col5;
    If the remote tables are at the same site, then you should have the same results. If they aren't, you should get the same results but different to the original query.
    We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries on remote and especially non-Oracle sites). This hinders tuning normally, but I don't think this is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times - in APEX, SQL Developer, Toad, SQL*Plus etc?
    Sorry for all the questions but it helps to answer the question, if I can.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • Performance Problems - CPU

    Hi all,
    I'm having some performance problems and i have generated an AWR of a day and i have seen this following things:
    Top 5 Timed Events                                          Avg %Total
    ~~~~~~~~~~~~~~~~~~                                         wait   Call
    Event                          Waits    Time (s)   (ms)    Time  Wait Class
    CPU time                                  50,318           41.7
    db file sequential read    6,688,472      32,711      5    27.1  User I/O
    Backup: sbtwrite2          1,068,309       7,903      7     6.6  Administra
    db file scattered read     1,012,065       6,999      7     5.8  User I/O
    PX Deq Credit: send blkd     231,401       4,989     22     4.1  Other
    Operating System Statistics DB/Inst: CAPDB14P/capdb14p1 Snaps: 15710-15778
    Statistic Total
    AVG_BUSY_TIME 3,221,704
    AVG_IDLE_TIME 4,923,831
    AVG_IOWAIT_TIME 2,302,776
    AVG_SYS_TIME 537,429
    AVG_USER_TIME 2,682,900
    BUSY_TIME 6,446,121
    IDLE_TIME 9,850,381
    IOWAIT_TIME 4,608,322
    SYS_TIME 1,077,598
    USER_TIME 5,368,523
    LOAD 0
    OS_CPU_WAIT_TIME 1,999,898,469,700
    RSRC_MGR_CPU_WAIT_TIME 0
    VM_IN_BYTES 12,201,893,888
    VM_OUT_BYTES 476,655,616
    PHYSICAL_MEMORY_BYTES 8,568,512,512
    NUM_CPUS 2
    NUM_CPU_SOCKETS 2
    I think we are having CPU problems here!
    All my memory caches are good, 99% hit.
    Does anybody agree with me?
    Tks,
    Paulo

    I have problems with some queries that have another wait event related to RAC.
    "gc cr multi block request" is taking a lot of time on some queries. These queries run very fast on another database that isn't a RAC database.
    Example:
    1 - The tables have the same number of rows.
    2 - Both tables and indexes are analyzed using the same tool (DBMS_STATS).
    ####RAC DATABASE####
    SELECT 1 from dual
    WHERE NOT EXISTS (SELECT 1
    FROM mensalidade a
    WHERE data_vencimento >= CHAR_TO_DATE('20070201'));
    ----Explain
    SELECT STATEMENT, GOAL = ALL_ROWS               4     1     
    FILTER                         
    FAST DUAL               2     1     
    PX COORDINATOR FORCED SERIAL                         
    PX SEND QC (RANDOM)     SYS     :TQ10000     2     1     7
    PX BLOCK ITERATOR               2     1     7
    INDEX FAST FULL SCAN     BRCAPDB2     IMENSALIDADE1     2     1     7
    ----It takes more than 500 seconds to run
    ####STANDALONE DATABASE####
    SELECT 1 from dual
    WHERE NOT EXISTS (SELECT 1
    FROM mensalidade a
    WHERE data_vencimento >= CHAR_TO_DATE('20070201'));
    ----Explain
    SELECT STATEMENT, GOAL = ALL_ROWS               4     1     
    FILTER                         
    FAST DUAL               2     1     
    PX COORDINATOR FORCED SERIAL                         
    PX SEND QC (RANDOM)     SYS     :TQ10000     2     2     16
    PX BLOCK ITERATOR               2     2     16
    TABLE ACCESS FULL     BRCAPDB2     MENSALIDADE     2     2     16
    ----It takes 0.1 seconds to run
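    One thing worth checking - a sketch only, with object names taken from the plans above - is that the two plans differ in access path (INDEX FAST FULL SCAN of IMENSALIDADE1 on RAC versus TABLE ACCESS FULL of MENSALIDADE standalone), so compare the objects' parallel DEGREE and statistics on both databases:
    SELECT table_name, degree, num_rows, last_analyzed
    FROM   dba_tables
    WHERE  table_name = 'MENSALIDADE';
    SELECT index_name, degree, num_rows, last_analyzed
    FROM   dba_indexes
    WHERE  index_name = 'IMENSALIDADE1';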

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
    Within a non-global zone I’m running a tomcat application (an institutional repository) connecting via localhost to a Postgresql database (the OS provided version). The processor load is typically not very high as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU                            
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens we experience that the application slows down and swaps out (according to below). When it settles everything seems to turn back to normal. When the problem is acute the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
    My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance evaluation on the database with “vacuum analyze” took 19 minutes on the server and only 1 minute on a desktop pc. This is horrific when taking the hardware into consideration.
    The short story:
    I’m trying different steps but running out of ideas. We’ve read that the database block size and file system block size should match. PostgreSQL uses 8 KB blocks and ZFS defaults to 128 KB records. I didn’t find much information on the matter, so if anyone can help, please recommend how to make this change…
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.

  • Performance problems with DFSN, ABE and SMB

    Hello,
    We have identified a problem with DFS-Namespace (DFSN), Access Based Enumeration (ABE) and SMB File Service.
    Currently we have two Windows Server 2008 R2 servers providing the domain-based DFSN in functional level Windows Server 2008 R2 with activated ABE.
    The DFSN servers have the most current hotfixes for DFSN and SMB installed, according to http://support.microsoft.com/kb/968429/en-us and http://support.microsoft.com/kb/2473205/en-us
    We have only one AD-site and don't use DFS-Replication.
    Servers have 2 Intel X5550 4 Core CPUs and 32 GB Ram.
    Network is a LAN.
    Our DFSN looks like this:
    \\contoso.com\home
        Contains 10,000 Links
        Drive mapping on clients to subfolder \\contoso.com\home\username
    \\contoso.com\group
        Contains 2,500 Links
        Drive mapping on clients directly to \\contoso.com\group
    On \\contoso.com\group we serve different folders for teams, projects and other groups with different access permissions based on AD groups.
    We have to use ABE, so that users see only accessible Links (folders)
    We encounter sometimes multiple times a day enterprise-wide performance problems for 30 seconds when accessing our Namespaces.
    After six weeks of researching and analyzing we were able to identify the exact problem.
    Administrators create a new DFS-Link in our Namespace \\contoso.com\group with correct permissions using the following command line:
    dfsutil.exe link \\contoso.com\group\project123 \\fileserver1\share\project123
    dfsutil.exe property sd grant \\contoso.com\group\project123 CONTOSO\group-project123:RX protect replace
    This is done a few times a day.
    There is no possibility to create the folder and set the permissions in one step.
    DFSN process on our DFSN-servers create the new link and the corresponding folder in C:\DFSRoots.
    At this time, we have for example 2000+ clients having an active session to the root of the namespace \\contoso.com\group.
    Active session means a Windows Explorer opened to the mapped drive or to any subfolder.
    The file server process (Lanmanserver) sends a change notification (SMB-Protocol) to each client with an active session \\contoso.com\group.
    All the clients which were getting the notification now start to refresh the folder listing of \\contoso.com\group
    This was identified by an network trace on our DFSN-servers and different clients.
    Due to ABE the servers have to compute the folder listing for each request.
    The DFS service on the servers doesn't respond for probably 30 seconds to any additional requests. CPU usage increases significantly over this period and goes back to normal afterwards - on our hardware from about 5% to 50%.
    Users can't access all DFS-Namespaces during this time and applications using data from DFS-Namespace stop responding.
    Side effect: Windows reports on clients a slow-link detection for \\contoso.com\home, which can be offline available for users (described here for WAN-connections: http://blogs.technet.com/b/askds/archive/2011/12/14/slow-link-with-windows-7-and-dfs-namespaces.aspx)
    The problem doesn't occur when creating a link in \\contoso.com\home, because users only have mappings to subfolders.
    Currently, the problem also doesn't occur for \\contoso.com\app, because users usually don't use Windows Explorer to access this mapping.
    Disabling ABE reduces the DFSN freeze time, but doesn't solve the problem.
    Problem also occurs with Windows Server 2012 R2 as DFSN-server.
    There is a registry key available for clients to avoid the response to the change notification (NoRemoteChangeNotify, see http://support.microsoft.com/kb/812669/en-us).
    This might fix the problem with DFSN, but it results in other problems for the users. For example, they have to press F5 to refresh every remote directory on change.
    Is there a possibility to disable the SMB change notification on server side ?
    TIA and regards,
    Ralf Gaudes

    Hi,
    Thanks for posting in Microsoft Technet Forums.
    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thank you for your understanding and support.
    Regards.

  • Performance problems with File Adapter and XI freeze

    Hi NetWeaver XI geeks,
    We are deploying an XI-based product and encountering some huge performance problems. Below are the scenario and the issues:
    - NetWeaver XI 2004
    - SAP 4.6c
    - Outbound Channel
    - No mapping used and only the iDocs Adapter is involved in the pipeline processing
    - File Adapter
    - message file size < 2 KB
    We have narrowed the problem down to the iDoc adapter's performance.
    We are using a file channel: every 15 seconds a file in a valid iDoc format is placed in a folder; the iDoc adapter picks up the file from this folder and sends it to the SAP R/3 instance.
    For a few minutes (approx 5 mins) it works (the CPU usage is less than 20%, even if the processing time seems huge: 5 sec/msg), but after this time the application gets blocked and the CPU gets overloaded at 100% (2 disp_worker.exe processes at 50% each).
    If we inject several files into the source folder at the same time, or if we decrease the time gap between creation of 2 iDoc files (from 15 seconds to 10 seconds), the process blocks after posting 2-3 docs to SAP R/3.
    Could you point us some reasons that could provoke that behavior?
    Basically looking for some help in improving performance of the Idoc adapter.
    Thanks in advance for your help and regards,
    Adalbert

    Hi Bhavesh,
    Thanks for your suggestions. We will test...
    We wonder whether the hardware is the cause of this extremely poor performance.
    Our XI server is:
    •     Windows 2003 Server
    •     Processors: 2 x 3 GHz
    •     RAM: 4 GB (memory is not saturated)
    The messages are well-formed iDocs = single-line INVOICES.
    Some posts talk about 2000 messages processed in a few seconds... whereas we get 5 sec per message.
    Thanks for your help.
    Adalbert

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN    varchar2(20),
      XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE        varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
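    For the record, here's the shape I'm thinking of trying - a minimal sketch only, reusing the example tables above and showing just Subelement1: shred the repeating elements with XMLTABLE and join CODES as an ordinary table, so the optimizer is at least free to hash join the code table instead of probing it once per XPath evaluation.
    SELECT x.sub1_code, c.description
    FROM   records r,
           XMLTABLE('/Root/Element'
                    PASSING r.xmlrec
                    COLUMNS sub1_code VARCHAR2(4) PATH 'Subelement1/Code') x,
           codes c
    WHERE  r.ssn  = '10000'
    AND    c.code = x.sub1_code;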

  • Performance problems with iTunes 7.4.3.1 + Vista

    I've got a really annoying problem since "upgrading" my PC to Vista Ultimate (actually it was a clean install).
    Vista all working and performing fine. Downloaded and installed iTunes + imported my library. When I went to use iTunes, as soon as I open it it's already starting to struggle. It pauses a few times when opening, then eventually is operable. I use the "Cover Flow View" when looking at my library and scrolling back/forward is nasty. Very sluggish and slow. Firing up a video is the same. It's very choppy.
    After searching I've tried the tricks with disabling Direct3D etc. in Quicktime but not surprisingly it's made no difference.
    One thing to note is that my soundcard (m-audio 24/96) is not yet supported and is disabled in device manager. Not happy about this but I figured I could still use itunes+ipod so not the end of the world (yet). I doubt this would cause this problem but you never know.
    Additionally my itunes music folder is pointed at a share on my network. Always has been from day 1 on 100mbit and it worked fine and I'm now on a gigabit LAN. Again, I doubt this should make any difference.
    PC vital stats are below. Hardly cutting edge any more I know but still more than capable.
    P4 3ghz (sock 478 version)
    2GB Ram
    Nvidia 6800GT
    Vista Ultimate
    iTunes 7.4.3.1
    5th gen iPod
    I'm getting really irritated by this problem now
    Any ideas?

    divtag wrote:
    Very strange. Yesterday all the problems magically disappeared! It's now working perfectly.
    I loaded about 30GB of lossless tracks into the lib and did some seemingly unrelated windows updates yesterday but that's it. I guess there might have been something in one of the updates so we'll see how long before another one breaks it again.
    It has nothing to do with Windows, or any updates from Microsoft. iTunes is coded by Apple, and poorly at that. Many suspect they do it on purpose because it is in their interest to trick the dumb-minded public into thinking that a Mac is faster than a PC. Which isn't really possible, since they run on the same hardware now.
    There can be slight performance differences due to OS, but there is no way in **** that Windows would ever slow down iTunes to its present state.
    Windows Media Player is silky smooth and Winamp is even smoother......
    iTunes is horrible.
    The features and tools are nice, and the hardware it syncs to (iPod, iPhone etc.) is all lovely... It's just too damn bad us Windows users get NO support from Apple whatsoever.
    It's payback for what Microsoft did with Windows Media Player on the Mac. WMP was programmed like crap on the Mac, and they eventually just stopped developing it for the Mac. MS has no interest in Apple looking good, and Apple has no interest in Windows looking good.
    It's a damn shame too, because as they pull these BS games with each other, they ignore us users, who have paid our hard-earned money for their products.
    Message was edited by: DwindleFlip

  • Serious performance problem - SELECT DISTINCT x.JDOCLASSX FROM x

    I am noticing a huge performance problem when trying to access a member that
    is lazily loaded:
    MonitorStatus previousStatus = m.getStatus();
    This causes the following query to be executed:
    SELECT DISTINCT MONITORSTATUSX.JDOCLASSX FROM MONITORSTATUSX
    This table has 3 million records and this SQL statement takes 3 minutes to
    execute! Even worse, my app heavily uses threads, so this statement is
    executed in each of the 32 threads. As a result the application stops.
    Is there any way that I can optimize this? And more importantly, can Kodo
    handle a multithreaded app like this with a huge database? I've been having
    a lot of performance problems since I've started doing stress & load
    testing, and I'm thinking Kodo isn't ready for this type of application.
    Thanks,
    Michael

    You can prevent this from happening by explicitly enumerating the valid
    persistent types in a property. See
    http://docs.solarmetric.com/manual.html#com.solarmetric.kodo.PersistentTypes
    for details.
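    As a rough sketch only (the class names here are hypothetical, and the exact value format - including the separator - is whatever the linked manual section says), the idea is a kodo.properties entry along these lines:
    com.solarmetric.kodo.PersistentTypes=com.example.MonitorStatus,com.example.Monitor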
    > Inconveniently, this nugget of performance info is not listed in the
    > optimization guide. I'll add in an entry for it.
    This setting did in fact prevent the query from running, which fixed the
    problem. It definitely belongs in the optimization guide.
    > And more importantly, can Kodo handle a multithreaded app like this with a
    > huge database? I've been having a lot of performance problems since I've
    > started doing stress & load testing, and I'm thinking Kodo isn't ready for
    > this type of application.
    > I'd like to find out more details about your issues. We do a decent amount
    > of stress / load testing internally, but there are always use cases that we
    > don't test. Please send me an email (I'm assuming that
    > [email protected] is not really your address) and let's figure out
    > some way to do an analysis of what you're seeing.
    This email is just for posting to usenet, to avoid spam. I'm now running my
    app through stress/load testing so I hope to discover any remaining issues
    before going into production. As of this morning the system seems to be
    performing quite well. Now the biggest performance problem for me is the
    lack of what I think is called "outer join". I know you'll have this in 3.0
    but I'm suprised you don't have this already because not having it really
    affects performance. I already had to code one query by hand with JDBC due
    to this. It was taking 15+ minutes with Kodo and with my JDBC version it
    only takes a few seconds. There are lots of anti-JDO people and performance
    issues like this really give them ammunition. Overall I just have the
    impression that Kodo hasn't been used on many really large scale projects
    with databases that have millions of records.
    Thanks for configuration fix,
    Michael

  • Getting value with an anonymous block using ODP

    Hi all!
    I have a problem I hope someone can help me with. I believe it to be a minor one. I am trying to embed an anonymous block in my .NET app and use it dynamically to get a value from the database depending on the values in a table. Since my procedure is quite large, I am displaying a small example proc for simplicity purposes. Basically I want to execute an anonymous block from my app that will return a value (not a row or rows) from the database. The code is below:
    Private Sub test()
    Dim cn As New OracleConnection(profileString)
    Try
    Dim sb As New System.Text.StringBuilder
    sb.Append("Declare ")
    sb.Append("v_maxnum varchar2(6); ")
    sb.Append("Begin ")
    sb.Append("Select max(to_number(email_address_id)) into ")
    sb.Append("v_maxnum from CVWH14_CDRV_TEST.EMAIL_ADDRESS_TBL; ")
    sb.Append("dbms_output.put_line(v_maxnum); ")
    sb.Append("Exception ")
    sb.Append("When Others ")
    sb.Append("Then ")
    sb.Append("dbms_output.put_line('Program run errors have occurred.'); ")
    sb.Append("End; ")
    Dim cmd As New OracleCommand(sb.ToString, cn)
    With cmd
    cmd.CommandType = CommandType.Text
    Dim parm As New OracleParameter
    parm.ParameterName = "v_maxnum"
    parm.OracleType = OracleType.VarChar
    parm.Direction = ParameterDirection.Output
    parm.Size = 6
    cmd.Connection.Open()
    Dim ret As Object = cmd.ExecuteScalar()
    Dim res As String = cmd.Parameters.Item(0).Value.ToString -- **Error is occurring here**
    cmd.Connection.Close()
    cmd.Dispose()
    End With
    Catch ex As Exception
    MessageBox.Show(ex.Message, "Error")
    'End If
    If cn.State = ConnectionState.Open Then
    cn.Close()
    End If
    End Try
    End Sub
    The exception error reads "Invalid Index 0 for this OracleParameterCollection with Count=0."
    If I can figure out how to get a parameter value from the database via the anonymous block, I can apply the logic to the real application. Any help or direction I could receive would be greatly appreciated. Thanks for reading this post!

    Thank you for responding. The code that I posted was just one of many ways I have tried. I retried the proc making just 2 changes:
    Private Sub test()
    Dim cn As New OracleConnection(profileString)
    Try
    Dim sb As New System.Text.StringBuilder
    sb.Append("Declare ")
    sb.Append("v_maxnum varchar2(6); ")
    sb.Append("Begin ")
    sb.Append("Select max(to_number(email_address_id)) into ")
    sb.Append("v_maxnum from CVWH14_CDRV_TEST.EMAIL_ADDRESS_TBL; ")
    sb.Append("dbms_output.put_line(:v_maxnum); ") -- !Changed this to a bind variable!
    sb.Append("Exception ")
    sb.Append("When Others ")
    sb.Append("Then ")
    sb.Append("dbms_output.put_line('Program run errors have occurred.'); ")
    sb.Append("End; ")
    Dim cmd As New OracleCommand(sb.ToString, cn)
    With cmd
    cmd.CommandType = CommandType.Text
    Dim parm As New OracleParameter
    parm.ParameterName = ":v_maxnum" -- !Changed this to a bind variable!
    parm.OracleType = OracleType.VarChar
    parm.Direction = ParameterDirection.Output
    parm.Size = 6
    cmd.Connection.Open()
    Dim ret As Object = cmd.ExecuteScalar() -- !The error is now occurring here!
    Dim res As String = cmd.Parameters.Item(0).Value.ToString
    cmd.Connection.Close()
    cmd.Dispose()
    End With
    Catch ex As Exception
    MessageBox.Show(ex.Message, "Error")
    If cn.State = ConnectionState.Open Then
    cn.Close()
    End If
    End Try
    End Sub
    I am now getting the error message "Not all variables bound". Any more help or direction that you could throw my way would be greatly appreciated.
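    For what it's worth, a minimal sketch of the block itself (names from the post above): assign into the bind variable instead of calling dbms_output. Note also that in both posted versions the parameter object is created but never added to the command's Parameters collection (cmd.Parameters.Add(parm)), which is what "Not all variables bound" is complaining about.
    Begin
      Select max(to_number(email_address_id))
      into   :v_maxnum
      from   CVWH14_CDRV_TEST.EMAIL_ADDRESS_TBL;
    Exception
      When Others Then
        :v_maxnum := null;   -- signal failure via the bind instead of dbms_output
    End;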
