Performance Problems on Faces Navigation Diagram and Hyperthreading query

Am I the only one having performance problems with Faces-Config diagrams when about 35 JSPs are displayed on the sheet? I'm using JDeveloper 10.1.3, and it takes my workstation about a minute and a half just to update the name of an arrow. The CPU seems to be the component most stressed during this task.
And another question: has anybody investigated how JDeveloper's performance is affected by enabling or disabling hyperthreading? In my case CPU usage only ever reaches 50%, so I'm tempted to switch HT off to let JDev use all the CPU power, if that would help.

Hello Diego,
you mentioned that you compared a BEx query with the Web Intelligence report. Could you provide more details here?
- What are the elements in the rows, columns, and free characteristics of the BEx query?
- Was the query executed as designed in the BEx Query Designer with BEx Web Reporting?
- What are the elements in the Web Intelligence query panel?
Thanks,
Ingo

Similar Messages

  • Navigation block and the Query Table Side by side in output

    Hi
    I am unable to get the navigation block and the query table side by side when I execute the WAD. Is it possible to place those two objects next to each other in the WAD output? In the WAD design they are placed side by side.
    Regards, Pra

    I meant an HTML table:
    In the WAD menu you can choose Insert -> Table -> Insert Table.
    You need one row and two columns. Put the navigation block in one column and the query table in the other.
    Regards
    Erwin
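    For reference, a minimal sketch of the one-row, two-column layout table this produces in the WAD template (the comments are placeholders; the actual web item tags depend on your WAD version):
    <table>
      <tr>
        <td><!-- navigation block web item goes here --></td>
        <td><!-- query table web item goes here --></td>
      </tr>
    </table>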

  • Performance Problems with "For all Entries" and a big internal table

    We have big performance problems with the following statement:
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
      FOR ALL ENTRIES IN gt_zmon_help
        WHERE
        status = 'IAI200' AND
        logdat IN gs_dat AND
        ztrack = gt_zmon_help-ztrack.
    The internal table gt_zmon_help contains over 1,000,000 entries.
    Does anyone have an idea how to improve the performance?
    Thank you!

    > Matthias Weisensel wrote: (question quoted above)
    You can't expect miracles. With over a million entries in your itab, any select is going to take a bit of time. Do you really need all these records in the itab? How many records does the select bring back? I'm assuming that you have, and are using, indexes on your ZEEDMT_ZMON table.
    In this situation, I'd first try to think of another way of running the query that restricts the amount of data; if that were not possible, I'd just run it in the background and accept that it is going to take a long time.
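    Two safeguards are also worth adding around the posted statement itself (a sketch, with the field names taken from the post): with FOR ALL ENTRIES, an empty driver table makes the SELECT ignore the whole WHERE clause and fetch every row, and using the same internal table as both the FOR ALL ENTRIES driver and the INTO TABLE target is risky at best, so a separate, deduplicated driver table is safer:
    DATA: gt_driver TYPE STANDARD TABLE OF zeedmt_zmon,
          gt_result TYPE STANDARD TABLE OF zeedmt_zmon.
    " Deduplicate the driver values so the database receives fewer IN-list entries.
    gt_driver = gt_zmon_help.
    SORT gt_driver BY ztrack.
    DELETE ADJACENT DUPLICATES FROM gt_driver COMPARING ztrack.
    " Guard against an empty driver table, which would select ALL rows.
    IF gt_driver IS NOT INITIAL.
      SELECT * FROM zeedmt_zmon INTO TABLE gt_result
        FOR ALL ENTRIES IN gt_driver
        WHERE status = 'IAI200'
          AND logdat IN gs_dat
          AND ztrack = gt_driver-ztrack.
    ENDIF.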

  • Performance problem between Oracle.DataAccess v1 and v2

    Hi, I have a serious performance problem with OracleDataReader when I use the GetValues method.
    My server is Oracle 9.2.0.7, and I use ODAC v10.2.0.221.
    I created a dummy table for the benchmark:
    create table test (a varchar2(50), b number);
    begin
      for i in 1..62359 loop
        insert into test values ('Values ' || i, i);
      end loop;
      commit;
    end;
    /
    I use the same code to benchmark Framework v1 and Framework v2.
    Code:
    try {
        OracleConnection c = new OracleConnection("user id=saturne_dbo;password=***;data source=satedfx;");
        c.Open();
        go(c);
        c.Close();
    }
    catch (Exception ex) {
        MessageBox.Show(ex.Message);
    }
    private void go(IDbConnection c) {
        IDbCommand cmd = c.CreateCommand();
        cmd.CommandText = "select * from test";
        cmd.CommandType = CommandType.Text;
        DateTime dt = DateTime.Now;
        IDataReader reader = cmd.ExecuteReader();
        int count = 0;
        while (reader.Read()) {
            object[] fields = new object[reader.FieldCount];
            reader.GetValues(fields);
            count++;
        }
        reader.Close();
        TimeSpan eps = DateTime.Now - dt;
        MessageBox.Show("Time " + count + " : " + eps.TotalSeconds);
    }
    The results are:
    Framework v1 with Oracle.DataAccess 1.10.2.2.20: "Time 62359 : 0.5"
    Framework v2 with Oracle.DataAccess 2.10.2.2.20: "Time 62359 : 3.57" -- a factor of 6!
    I noticed the same problem with the OleDb provider and the Microsoft Oracle Client provider.
    It's a serious problem for my production server; the computation time explodes.
    What is the explanation? Does anyone know a solution?

    Can you please try out the following:
    1. Create a .NET 1.x DLL with your benchmark code. This will obviously use ODP.NET for .NET 1.x.
    2. Call this assembly routine from a .NET 1.x executable and note the results.
    3. Now call the same assembly routine from a .NET 2.0 executable and note the results.
    The idea is to always use "ODP.NET for .NET 1.x", even in the .NET 2.0 runtime. This will tell us whether the performance degradation is a runtime issue.
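    A minimal sketch of such a wrapper (the class and method names here are made up for illustration): compile it once as a .NET 1.x class library against ODP.NET for .NET 1.x, then reference the same DLL from both test executables.
    // Benchmark.cs -- hypothetical .NET 1.1 class library referencing
    // Oracle.DataAccess 1.x; only 1.x-compatible syntax is used.
    using System;
    using System.Data;
    using Oracle.DataAccess.Client;
    public class Benchmark {
        // Runs the same read loop as the original post; returns elapsed seconds.
        public static double Run(string connectString) {
            OracleConnection c = new OracleConnection(connectString);
            c.Open();
            IDbCommand cmd = c.CreateCommand();
            cmd.CommandText = "select * from test";
            DateTime dt = DateTime.Now;
            IDataReader reader = cmd.ExecuteReader();
            while (reader.Read()) {
                object[] fields = new object[reader.FieldCount];
                reader.GetValues(fields);
            }
            reader.Close();
            c.Close();
            return (DateTime.Now - dt).TotalSeconds;
        }
    }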

  • Query performance problem - events 2505-read cache and 2510-write cache

    Hi,
    I am experiencing severe performance problems with a query, specifically with events 2505 (Read Cache) and 2510 (Write Cache), which went up to 11,000 seconds on some executions. Data Manager (400 s), OLAP data selection (90 s), and OLAP user exit (250 s) are the other events with noticeable times. All other events are very quick.
    The query settings (RSRT) are:
    - persistent cache across each app server -> cluster table,
    - update cache in delta process is checked -> grouped on InfoProvider type,
    - use cache despite virtual characteristics/key figures is checked (one InfoCube has 1 virtual key figure which should have a static result for a day).
    => Do you know how I can get more details than what's in 0TCT_C02 to break down the read and write cache event times, or do you have any recommendation?
    I have checked, and no data loads were in progress on the InfoProviders and no master data loads (change run). Overall system performance was acceptable for other queries.
    Thanks

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark

  • Performance problem with Integration with COGNOS and Bex

    Hi Gems,
    I have a performance problem with some of my queries when integrating with COGNOS.
    My query is simple and gets the data for the date interval:
    From date: 20070101
    To date: 20070829
    When executing the query in BEx it takes 2 minutes, but when it is executed in COGNOS it takes almost 10 minutes and above.
    Is there any way we can debug how the report sends its data to COGNOS, like debugging the OLE DB, and how can we increase the performance of the query in COGNOS?
    Thanks in advance
    Regards
    AK

    Hi,
    Please check the following CA Unicenter config files on the SunMC server:
    - Is the Event Adapter (ea-start) running? Without this daemon no event forwarding to CA Unicenter is done, nor does discovery from CA Unicenter work.
    How to debug:
    - Run ea-start in debug mode:
    # /opt/SUNWsymon/SunMC-TNG/sbin/ea-start -d9
    - Check whether the Event Adapter has been set up:
    # /var/opt/SUNWsymon/SunMC-TNG/cfg_sunmctotng
    - Check the CA log file:
    # /var/opt/SUNWsymon/SunMC-TNG/SunMCToTngAdaptorMain.log
    Once all that is fine, check this site; it explains how to discover a SunMC agent from CA Unicenter:
    http://docs.sun.com/app/docs/doc/817-1101/6mgrtmkao?a=view#tngtrouble-6
    Kind Regards

  • Performance problems related to Timesheet entry and Time Admin processing.

    Implementing 9.0; they are in UAT, experiencing performance delays on the Time Admin and Timesheet pages when using the Apply Rules button. They have quite a few rules, and when the number of users increases to 30 concurrent users, severe performance issues are experienced on the timesheet. At this point they are more concerned with the timesheet performance than the Time Admin performance, and they have delayed their go-live date until this issue gets resolved.
    In the Performance Monitor data we are getting several failed statuses for the PMU 'JOLT Request' and PMU Details 'ICPanel'. In the additional data area it states:
    Error Status Code:
    Jolt ServiceException: Jolt Errno 100 JoltException.TPEJOLT
    PeopleSoft 9.0
    WebLogic 9.2
    Database: SQL Server 2005 SP3
    Windows Server 2003 SP2

    Have you tried raising an SR on Oracle Support?
    Also, timesheet performance is a known issue and there are multiple such issues reported on Metalink. You can look at them for potential solutions:
    https://support.oracle.com/CSP/main/article?cmd=show&id=659033.1&type=NOT
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=PROBLEM&id=857761.1
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=PROBLEM&id=961924.1

  • BDB read performance problem: lock contention between GC and VM threads

    Problem: BDB read performance is really bad once the size of the BDB crosses 20GB. Once the database crosses 20GB or so, it takes more than one hour to read/delete/add 200K keys.
    After a point, of these 200K keys about 15-30K are new; this number should eventually come down, and after a point there should be no new keys at all.
    Application:
    Transactional Data Store application. A single-threaded process that reads one key's data, deletes the data, and adds new data. The keys are really small (20 bytes) and the data is large (grows from 1KB to 100KB).
    On one machine I have a total of 3 processes running, each accessing its own BDB on a separate RAID 1+0 drive. So, as far as I can tell, there should really be no disk I/O wait slowing down the reads.
    After a point (past 20GB), there are about 4-5 million keys in my BDB, and the data associated with each key can be anywhere between 1KB and 100KB. Eventually every key will have 100KB of data associated with it.
    Hardware:
    16 core Intel Xeon, 96GB of RAM, 8 drive, running 2.6.18-194.26.1.0.1.el5 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
    BDB config: BTREE
    bdb version: 4.8.30
    bdb cache size: 4GB
    bdb page size: experimented with 8KB, 64KB.
    3 processes, each process accesses its own BDB on a separate RAIDed(1+0) drive.
    envConfig.setAllowCreate(true);
    envConfig.setTxnNoSync(ourConfig.asynchronous);
    envConfig.setThreaded(true);
    envConfig.setInitializeLocking(true);
    envConfig.setLockDetectMode(LockDetectMode.DEFAULT);
    When writing to BDB (asynchronous transactions):
    TransactionConfig tc = new TransactionConfig();
    tc.setNoSync(true);
    When reading from BDB (allow reading from uncommitted pages):
    CursorConfig cc = new CursorConfig();
    cc.setReadUncommitted(true);
    BDB stats: BDB size 49GB
    $ db_stat -m
    3GB 928MB Total cache size
    1 Number of caches
    1 Maximum number of caches
    3GB 928MB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    2127M Requested pages found in the cache (97%)
    57M Requested pages not found in the cache (57565917)
    6371509 Pages created in the cache
    57M Pages read into the cache (57565917)
    75M Pages written from the cache to the backing file (75763673)
    60M Clean pages forced from the cache (60775446)
    2661382 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    500593 Current total page count
    500593 Current clean page count
    0 Current dirty page count
    524287 Number of hash buckets used for page location
    4096 Assumed page size used
    2248M Total number of times hash chains searched for a page (2248788999)
    9 The longest hash chain searched for a page
    2669M Total number of hash chain entries checked for page (2669310818)
    0 The number of hash bucket locks that required waiting (0%)
    0 The maximum number of times any hash bucket lock was waited for (0%)
    0 The number of region locks that required waiting (0%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    63M The number of page allocations (63937431)
    181M The number of hash buckets examined during allocations (181211477)
    16 The maximum number of hash buckets examined for an allocation
    63M The number of pages examined during allocations (63436828)
    1 The max number of pages examined for an allocation
    0 Threads waited on page I/O
    0 The number of times a sync is interrupted
    Pool File: lastPoints
    8192 Page size
    0 Requested pages mapped into the process' address space
    2127M Requested pages found in the cache (97%)
    57M Requested pages not found in the cache (57565917)
    6371509 Pages created in the cache
    57M Pages read into the cache (57565917)
    75M Pages written from the cache to the backing file (75763673)
    $ db_stat -l
    0x40988 Log magic number
    16 Log version number
    31KB 256B Log record cache size
    0 Log file mode
    10Mb Current log file size
    856M Records entered into the log (856697337)
    941GB 371MB 67KB 112B Log bytes written
    2GB 262MB 998KB 478B Log bytes written since last checkpoint
    31M Total log file I/O writes (31624157)
    31M Total log file I/O writes due to overflow (31527047)
    97136 Total log file flushes
    686 Total log file I/O reads
    96414 Current log file number
    4482953 Current log file offset
    96414 On-disk log file number
    4482862 On-disk log file offset
    1 Maximum commits in a log flush
    1 Minimum commits in a log flush
    160KB Log region size
    195 The number of region locks that required waiting (0%)
    $ db_stat -c
    7 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    160 Number of lock object partitions
    0 Number of current locks
    1218 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    8 Maximum number of lockers at any one time
    0 Number of current lock objects
    1218 Maximum number of lock objects at any one time
    5 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    400M Total number of locks requested (400062331)
    400M Total number of locks released (400062331)
    0 Total number of locks upgraded
    1 Total number of locks downgraded
    0 Lock requests not available due to conflicts, for which we waited
    0 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    1MB 544KB The size of the lock region
    0 The number of partition locks that required waiting (0%)
    0 The maximum number of times any partition lock was waited for (0%)
    0 The number of object queue operations that required waiting (0%)
    0 The number of locker allocations that required waiting (0%)
    0 The number of region locks that required waiting (0%)
    5 Maximum hash bucket length
    $ db_stat -CA
    Default locking region information:
    7 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    160 Number of lock object partitions
    0 Number of current locks
    1218 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    8 Maximum number of lockers at any one time
    0 Number of current lock objects
    1218 Maximum number of lock objects at any one time
    5 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    400M Total number of locks requested (400062331)
    400M Total number of locks released (400062331)
    0 Total number of locks upgraded
    1 Total number of locks downgraded
    0 Lock requests not available due to conflicts, for which we waited
    0 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    1MB 544KB The size of the lock region
    0 The number of partition locks that required waiting (0%)
    0 The maximum number of times any partition lock was waited for (0%)
    0 The number of object queue operations that required waiting (0%)
    0 The number of locker allocations that required waiting (0%)
    0 The number of region locks that required waiting (0%)
    5 Maximum hash bucket length
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock REGINFO information:
    Lock Region type
    5 Region ID
    __db.005 Region name
    0x2accda678000 Region address
    0x2accda678138 Region primary address
    0 Region maximum allocation
    0 Region allocated
    Region allocations: 6006 allocations, 0 failures, 0 frees, 1 longest
    Allocations by power-of-two sizes:
    1KB 6002
    2KB 0
    4KB 0
    8KB 0
    16KB 1
    32KB 0
    64KB 2
    128KB 0
    256KB 1
    512KB 0
    1024KB 0
    REGION_JOIN_OK Region flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock region parameters:
    524317 Lock region region mutex [0/9 0% 5091/47054587432128]
    2053 locker table size
    2053 object table size
    944 obj_off
    226120 locker_off
    0 need_dd
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock conflict matrix:
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by lockers:
    Locker Mode Count Status ----------------- Object ---------------
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by object:
    Locker Mode Count Status ----------------- Object ---------------
    Diagnosis:
    I'm seeing way too much lock contention on the Java garbage collector threads and also on the VM thread when I strace my Java process, and I don't understand the behavior.
    We are spending more than 95% of the time trying to acquire locks, and I don't know what these locks are. Any info here would help.
    Earlier I thought the overflow pages were the problem, as the 100KB data size was exceeding all overflow page limits. So I implemented a duplicate-keys scheme, chunking my data to fit within the overflow page limits.
    Now I don't see any overflow pages in my system, but I still see bad BDB read performance.
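    (For illustration, a sketch of that chunking scheme using the com.sleepycat.db Java API; the key bytes, payload helper, and chunk size below are assumptions, not the poster's actual code. Each logical value becomes several sorted-duplicate records under one key, with a 4-byte sequence prefix so the chunks sort back into order:)
    // Imports assumed: com.sleepycat.db.*, java.nio.ByteBuffer.
    DatabaseConfig dbConfig = new DatabaseConfig();
    dbConfig.setType(DatabaseType.BTREE);
    dbConfig.setSortedDuplicates(true);       // several chunk records per key
    dbConfig.setAllowCreate(true);
    Database db = env.openDatabase(null, "chunked.db", null, dbConfig);
    byte[] value = loadPayload();             // assumed helper: the 1KB-100KB payload
    final int CHUNK = 4096;                   // assumed chunk size, below the overflow threshold
    for (int off = 0, seq = 0; off < value.length; off += CHUNK, seq++) {
        int len = Math.min(CHUNK, value.length - off);
        ByteBuffer buf = ByteBuffer.allocate(4 + len);
        buf.putInt(seq).put(value, off, len); // sequence prefix keeps duplicate order stable
        db.put(null, new DatabaseEntry(keyBytes), new DatabaseEntry(buf.array()));
    }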
    $ strace -c -f -p 5642 ---> (607 of the futex calls errored out: lock timed out)
    Process 5642 attached with 45 threads - interrupt to quit
    % time     seconds  usecs/call     calls    errors syscall
    98.19    7.670403        2257      3398       607 futex
     0.84    0.065886           8      8423           pread
     0.69    0.053980        4498        12           fdatasync
     0.22    0.017094           5      3778           pwrite
     0.05    0.004107           5       808           sched_yield
     0.00    0.000120          10        12           read
     0.00    0.000110           9        12           open
     0.00    0.000089           7        12           close
     0.00    0.000025           0      1431           clock_gettime
     0.00    0.000000           0        46           write
     0.00    0.000000           0         1         1 stat
     0.00    0.000000           0        12           lseek
     0.00    0.000000           0        26           mmap
     0.00    0.000000           0        88           mprotect
     0.00    0.000000           0        24           fcntl
    100.00    7.811814                 18083       608 total
    The above stats show that there is too much time spent locking (futex calls), and I don't understand that, because the application is really single-threaded. I have turned on asynchronous transactions, so the writes might be flushed asynchronously in the background, but spending that much time locking and timing out seems wrong.
    So there is possibly something I'm not setting, or something weird about the way the JVM is behaving on my box.
    I grepped for futex calls in one of my strace log snippets and saw that a VM thread grabbed the mutex the maximum number of times (223), followed by the garbage collector threads. The following are the lock counts and thread PIDs within the process:
    These are the GC threads (each has grabbed the lock roughly 85-95 times):
      86 [8538]
      85 [8539]
      91 [8540]
      91 [8541]
      92 [8542]
      87 [8543]
      90 [8544]
      96 [8545]
      87 [8546]
      97 [8547]
      96 [8548]
      91 [8549]
      91 [8550]
      80 [8552]
    "VM Periodic Task Thread" prio=10 tid=0x00002aaaf4065000 nid=0x2180 waiting on condition (main problem??)
     223 [8576] ==> grabbing a lock 223 times -- not sure why this is happening
    "pool-2-thread-1" prio=10 tid=0x00002aaaf44b7000 nid=0x21c8 runnable [0x0000000042aa8000] -- main worker thread
       34 [8648] (the main thread grabs the futex only 34 times, compared to all the other threads)
    The load average seems OK, though my system thinks it has very little memory left; I think that's because it's using a lot of memory for the file system cache?
    top - 23:52:00 up 6 days, 8:41, 1 user, load average: 3.28, 3.40, 3.44
    Tasks: 229 total, 1 running, 228 sleeping, 0 stopped, 0 zombie
    Cpu(s): 3.2%us, 0.9%sy, 0.0%ni, 87.5%id, 8.3%wa, 0.0%hi, 0.1%si, 0.0%st
    Mem: 98999820k total, 98745988k used, 253832k free, 530372k buffers
    Swap: 18481144k total, 1304k used, 18479840k free, 89854800k cached
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    8424 rchitta 16 0 7053m 6.2g 4.4g S 18.3 6.5 401:01.88 java
    8422 rchitta 15 0 7011m 6.1g 4.4g S 14.6 6.5 528:06.92 java
    8423 rchitta 15 0 6989m 6.1g 4.4g S 5.7 6.5 615:28.21 java
    $ java -version
    java version "1.6.0_21"
    Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
    Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
    Maybe I should make my application a Concurrent Data Store app, as there is really only one thread doing the writes and reads. But I would like to understand why my process is spending so much time on locking.
    Can I try any other options? How do I prevent such heavy locking from happening? Has anyone seen this kind of behavior? Maybe this is all normal; I'm pretty new to using BDB.
    If there is a way to disable locking, that would also work, as there is only one thread that's really doing all the work.
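    (For illustration, a sketch of both options with the com.sleepycat.db Java API, assuming the environment is otherwise opened as in the config above; note that dropping the Transactional Data Store also drops transactions and recoverability:)
    // Imports assumed: com.sleepycat.db.*, java.io.File.
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setAllowCreate(true);
    envConfig.setInitializeCache(true);
    // Option A: one process, one thread -- no locking subsystem at all.
    envConfig.setInitializeLocking(false);
    // Option B: Concurrent Data Store -- one writer, many readers, no transactions.
    // envConfig.setInitializeCDB(true);
    Environment env = new Environment(new File("/path/to/env"), envConfig);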
    Should I disable the file system cache? One thing is that my application does not utilize the cache very well: once I visit a key, I don't visit it again for a very long time, so it's very possible that the key has to be read from disk again.
    It is possible that I'm thinking about this completely wrong, focusing too much on the locking behavior, and the problem is elsewhere.
    Any thoughts/suggestions are welcome. Your help on this is much appreciated.
    Thanks,
    Rama

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark

  • Performance Problems with UI Element Tabstrip and IE

    Hi,
    I use the UI element "Tabstrip" in a Java Web Dynpro application. The application gets slower the more I jump from one tab to the next. All the other UI elements (e.g. ComboBox, TextField) are affected, too.
    A system trace on the Web AS seems to indicate that it is a client problem.
    We are using Internet Explorer 6.0.2900.2180 on XP SP2 and the J2EE Engine 6.40.
    It seems to be a problem with the Tabstrip element in combination with IE (Firefox 3 works fine). We created a test application with only the tabstrip element, three tabs, and a combo box. After several clicks on the tabs, the test application gets slower...
    Has anyone had the same problem with tabstrips, or any idea what the reason might be?
    Thanks,
    Sabine

    Open an OSS message (BC-WD-UR).
    Armin

  • 2.1 RC1 - Performance problem, typing lag in EA2 and RC1

    Hi!
    I have searched and searched the forum but cannot find any reference to this specific issue. Please accept my apologies if this has already been posted, as I can't believe there are no other occurrences of this out there.
    On moving from 1.5.5 to either 2.1 EA2 or RC1, I experience massive performance issues that make SQL Developer unusable.
    Basically, just typing into a SQL worksheet consumes most of my machine's CPU and results in a huge amount of lag: typing a simple SELECT statement will take 10 to 20 seconds just for the text to catch up with my typing! It's infuriating!
    I've disabled all of the Code Insight options, ensured the 'Select default path to look for scripts' field is empty, and tried both the JRE-inclusive and JRE-exclusive versions, all with the same result. If I fire up 1.5.5 I'm immediately back in business, with no lag between typing and the display.
    My laptop is well specced: XP Pro, 2GHz dual core, 2GB RAM.
    Any thoughts other than what I've tried above?
    Many thanks in advance!

    Okay, this is weird. I can get around this by setting the 'Select default path for scripts' preference to something other than blank!
    I'm wondering if this is because a group policy sets the default home drive / My Documents folder to something on a network share. Could it be that having this value blank causes SQL Developer to poll this share, and therefore the network, causing the performance issue?
    As it is, setting the value to something on the local drive (C:\) seems to fix it, contrary to what other posts have had to say on the matter of this preference!

  • HTML character entities problem in saved regex search and replace query

    I have many saved search-and-replace regular expression queries (.dwr files). I have a problem specifically with saved queries that contain HTML entities such as "& nbsp ;" or "& shy ;" (spaces added, otherwise the code doesn't render in the browser). For example, I use the following search:
    ([\d]{3}& shy ;[\d]{3}& shy ;[\d]{4}|[\d]{3}& nbsp ;[\d]{3}& nbsp ;[\d]{4})
    (which searches for numbers in the 888-555-1234 or 888 555 1234 formats)
    This works fine if I enter it manually into the search text area. However, if I save it to a file and reload it, it no longer works, because the & nbsp ; and & shy ; entities are now displayed as their rendered characters (a non-breaking space and a soft hyphen), making the saved query useless, as it's no longer searching for the code. I have some fairly long and complex queries, and this is becoming a problem.
    Thanks for any help.
    I'm currently using Dreamweaver CS4 at home and CS5.5 at work.

    Thanks for your reply, Kenneth, but that is not what I'm trying to accomplish. I'm looking for the HTML entities that exist in the source code, which are & shy ; and & nbsp ; (without the spaces). As I mentioned above, if I enter them manually in the search box I get the correct results. If I save the search and then reload it, the special characters are no longer HTML entities and the search is useless.
    Just for example again:
    In an open document in code view, insert a number in this format (without the spaces): 888& nbsp ;888& nbsp ;8888
    Open a search dialog box and enter (without the spaces): [\d]{3}& nbsp ;[\d]{3}& nbsp ;[\d]{4}
    The search will find that entry.
    Save the search as phone.dwr, for example. Then load it and try the search again. It won't work, because upon loading the search Dreamweaver replaces the HTML code that was saved with the rendered HTML. So now the search shows up as [\d]{3} [\d]{3} [\d]{4}, which will not find the string with the hard-coded non-breaking spaces I'm looking for.
    Basically, I want to be able to save a search query for reuse. When I load a search query, I want it to be exactly what I saved, not something that DW has rendered (which doesn't work).

  • Improving performance in a merge between local and remote query

    If you try to merge two queries, one local (e.g. a table in Excel) and one remote (a table in SQL Server), the entire remote table is loaded into memory in order to apply the NestedJoin condition. This can be very slow. In my case, the goal is to import only those rows that have a product name listed in a local Excel table.
    I used SelectRows with the list of values from the local query (which has only one column) in order to apply an "IN ('value1', 'value2', ...)" condition in the SQL statement generated by Power Query (see the examples below).
    Questions:
    Is there another way to do that in "M"?
    Is there a way to build such a query (filter a table by using values obtained in another query) by using the user interface?
    Is this a scenario that could be better optimized in the future by improving query folding made by Power Query?
    Thanks for the feedback!
    Local Query
    let
        LocalQuery = Excel.CurrentWorkbook(){[Name="LocalTable"]}[Content]
    in
        LocalQuery
    Remote Query
    let
        Source = Sql.Databases("servername"),
        Database = Source{[Name="databasename"]}[Data],
        RemoteQuery = Database{[Schema="schemaname",Item="tablename"]}[Data]
    in
        RemoteQuery
    Merge Query (from Power Query user interface)
    let
        Merge = Table.NestedJoin(LocalQuery,{"ProductName"},RemoteQuery,{"ProductName"},"NewColumn",JoinKind.Inner),
        #"Expand NewColumn" = Table.ExpandTableColumn(Merge, "NewColumn", {"Description", "Price"}, {"NewColumn.Description", "NewColumn.Price"})
    in
        #"Expand NewColumn"
    Alternative merge approach (editing M - is it possible in user interface?)
    let
        #"Filtered Rows" = Table.SelectRows(RemoteQuery, each List.Contains ( Table.ToList(LocalQuery), [ProductName] ))
    in
        #"Filtered Rows"
    Marco Russo (sqlbi.com)

    Bingo! You've found a serious performance issue!
    The very same result can be produced in a fast or a slow way.
    Slow technique: do the RemoveColumns before the SelectRows (it may be that any transformation applied to the table before the SelectRows triggers this; I haven't tested it):
    let
        Source = Sql.Databases(".\k12"),
        AdventureWorksDW2012 = Source{[Name="AdventureWorksDW2012"]}[Data],
        dbo_FactInternetSales = AdventureWorksDW2012{[Schema="dbo",Item="FactInternetSales"]}[Data],
        #"Removed Columns" = Table.RemoveColumns(dbo_FactInternetSales,{"SalesOrderLineNumber", "RevisionNumber", "OrderQuantity", "UnitPrice", "ExtendedAmount", "UnitPriceDiscountPct", "DiscountAmount", "ProductStandardCost", "TotalProductCost",
    "SalesAmount", "TaxAmt", "Freight", "CarrierTrackingNumber", "CustomerPONumber", "OrderDate", "DueDate", "ShipDate", "DimCurrency", "DimCustomer", "DimDate(DueDateKey)", "DimDate(OrderDateKey)", "DimDate(ShipDateKey)", "DimProduct", "DimPromotion", "DimSalesTerritory",
    "FactInternetSalesReason"}),
        #"Filtered Rows" = Table.SelectRows(#"Removed Columns", each List.Contains(Selection[ProductKey],[ProductKey]))
    in
        #"Filtered Rows"
    Fast technique: do the RemoveColumns after the SelectRows:
    let
        Source = Sql.Databases(".\k12"),
        AdventureWorksDW2012 = Source{[Name="AdventureWorksDW2012"]}[Data],
        dbo_FactInternetSales = AdventureWorksDW2012{[Schema="dbo",Item="FactInternetSales"]}[Data],
        #"Filtered Rows" = Table.SelectRows(dbo_FactInternetSales, each List.Contains(Selection[ProductKey],[ProductKey])),
        #"Removed Columns" = Table.RemoveColumns(#"Filtered Rows",{"SalesOrderLineNumber", "RevisionNumber", "OrderQuantity", "UnitPrice", "ExtendedAmount", "UnitPriceDiscountPct", "DiscountAmount", "ProductStandardCost", "TotalProductCost",
    "SalesAmount", "TaxAmt", "Freight", "CarrierTrackingNumber", "CustomerPONumber", "OrderDate", "DueDate", "ShipDate", "DimCurrency", "DimCustomer", "DimDate(DueDateKey)", "DimDate(OrderDateKey)", "DimDate(ShipDateKey)", "DimProduct", "DimPromotion", "DimSalesTerritory",
    "FactInternetSalesReason"})
    in
        #"Removed Columns"
    I think the Power Query team should take a look at this.
    Thanks!
    Marco Russo (sqlbi.com)

  • Performance Problems Bex 7.0 and Office 2007 Workbooks

    Hi,
    we have a performance problem with BEx 7.0 and workbooks in Office 2007.
    The workbooks were created with Office 2003 and run with good performance there, but in Office 2007 the performance is unacceptable.
    E.g. opening a workbook with Office 2003 takes 30 seconds;
    opening the same workbook with Office 2007 takes 15 minutes.
    We have done everything we could find in SAP Notes, whitepapers, and SDN messages.
    For example:
    - We installed all the Excel patches described in: Microsoft Excel 2007 & SAP Business Explorer Compatibility
    - We set the optimization flag: RS_FRONTEND_INIT setting 'ANA_USE_OPTIMIZE_STG = X'
    - We opened the workbooks in Office 2007 with the repair flag.
    - We used the flag to open in XLS format.
    But the same workbooks are still extremely slow.
    If we create a new workbook with Office 2007, it runs with good performance, but there are 500 workbooks and we don't want to recreate them all.
    System information:
    BW: 7.0 NetWeaver 7.01 BI_CONT 7.05
    Client: SAP GUI 7.10, BI Explorer: 902
    Thank you for your help.

    Hello Carsten,
    Try to use workbook compression:
      -  Open the specific workbook in BEx Analyzer
      -  Open the Workbook Settings dialog
      -  Check "Use Optimized Storage"
      -  Click on the OK button
      -  Save the workbook
    But also, your front-end tools are on a very old version.
    I would recommend installing the latest patch of SAP GUI 7.20 and Business Explorer 7.20.
    Front-end version 7.10 will only be supported until April 2011.
    But if you want to continue using 7.10, update to the latest patch:
    http://service.sap.com/swdc
    > Support Packages and Patches
    > Browse our Download Catalog
      > SAP Frontend Components
    > SAP GUI FOR WINDOWS
    > SAP GUI FOR WINDOWS 7.10 CORE
    > Win32
    > gui710_20-10002995.exe
    > BI ADDON FOR SAP GUI
    > BI 7.0 ADDON FOR SAP GUI 7.10
    > bi710sp14_1400-10004472.exe
    Cheers,
    Edward John

  • Performance problems in livecache

    Hi All,
    we are facing a huge performance loss in liveCache; jobs are taking almost 10x longer than usual.
    The SCM/liveCache system is not giving any clue or hint as to why.
    We are using the following releases:
    1. liveCache version X64/LIX86 7.6.04 Build 015-123-189-221
    2. Current ABAP SP: LCAPPS 2005_700, patch 000
    3. SCM 5.00
    4. Oracle 10.2.0.4.0
    Any help is highly appreciated.
    Thanks
    sahmad

    Hi,
    Check these notes:
    Note 497289 - Performance when reading shipment
    Note 801419 - Performance problems in shipment (cost) processing
    and related notes
    Also, check whether you have implemented BAdIs related to shipment costs.
    Another way is to do a trace with ST05 to find the bottleneck. Check with a Basis consultant; perhaps you can solve it with a secondary index. But first check the notes above.
    Regards,
    Eduardo

  • Performance problem using OBJECT tag

    I have a performance problem using the Java plugin and was wondering if anyone else has seen the same thing. I have a rather complex applet that interacts with JavaScript in a web page using the LiveConnect API. The applet both calls JavaScript in the page and is called by JavaScript.
    I'm using IE6 with the Java plugin that ships with the 1.4.2_06 JVM. I have noticed that if I deploy the applet using the OBJECT tag, the application seems to thrash every time I call a Java method on the applet from JavaScript. When I deploy the same applet using the APPLET tag, the performance is much better. I would like to use the OBJECT tag because the applet behaves better and I have more control over caching.
    This problem seems to be on the boundaries of IE6, JScript, the JVM, and my applet (and I suppose any of them could be the real culprit). My application is IE5+ specific, so I cannot test the applet in isolation from the surrounding HTML/JavaScript (for example in another browser).
    Does anyone have any idea?
    Thanks in advance.
    Dennis.

