BDB read performance problem: lock contention between GC and VM threads

Problem: BDB read performance becomes very bad once the size of the BDB crosses roughly 20GB. Past that point it takes more than an hour to read/delete/add 200K keys.
Of those 200K keys, about 15-30K are new on each pass; that number should come down over time, and after a point there should be no new keys at all.
Application:
Transactional Data Store application. A single-threaded process that reads one key's data, deletes the data, and adds new data. The keys are really small (20 bytes) and the data is large (it grows from 1KB to 100KB).
On one machine I have a total of 3 processes running, each accessing its own BDB on a separate RAID 1+0 drive, so as far as I can tell there should be no disk I/O wait slowing down the reads.
Past 20GB there are about 4-5 million keys in my BDB, and the data associated with each key can be anywhere between 1KB and 100KB. Eventually every key will have 100KB of data associated with it.
Hardware:
16-core Intel Xeon, 96GB of RAM, 8 drives, running Linux 2.6.18-194.26.1.0.1.el5 #1 SMP x86_64
BDB config: BTREE
bdb version: 4.8.30
bdb cache size: 4GB
bdb page size: experimented with 8KB, 64KB.
3 processes, each process accesses its own BDB on a separate RAIDed(1+0) drive.
envConfig.setAllowCreate(true);
envConfig.setTxnNoSync(ourConfig.asynchronous);
envConfig.setThreaded(true);
envConfig.setInitializeLocking(true);
envConfig.setLockDetectMode(LockDetectMode.DEFAULT);
When writing to BDB (asynchronous transactions):
TransactionConfig tc = new TransactionConfig();
tc.setNoSync(true);
When reading from BDB (allow reading from uncommitted pages):
CursorConfig cc = new CursorConfig();
cc.setReadUncommitted(true);
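To make the setup concrete, here is a minimal sketch (not the poster's actual code) of how these settings combine into the read/delete/add cycle described above, using the com.sleepycat.db Java API; the environment home, database file name, and key/value bytes are placeholder assumptions:
import com.sleepycat.db.*;
import java.io.File;

public class ReadDeleteAddSketch {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig ec = new EnvironmentConfig();
        ec.setAllowCreate(true);
        ec.setInitializeCache(true);      // DB_INIT_MPOOL
        ec.setInitializeLocking(true);    // DB_INIT_LOCK, as in the post
        ec.setInitializeLogging(true);    // DB_INIT_LOG
        ec.setTransactional(true);        // DB_INIT_TXN (Transactional Data Store)
        ec.setTxnNoSync(true);            // asynchronous commits, as in the post
        ec.setThreaded(true);
        ec.setLockDetectMode(LockDetectMode.DEFAULT);
        Environment env = new Environment(new File("/path/to/env"), ec);

        DatabaseConfig dc = new DatabaseConfig();
        dc.setAllowCreate(true);
        dc.setTransactional(true);
        dc.setType(DatabaseType.BTREE);
        dc.setPageSize(8 * 1024);         // one of the page sizes tried above
        Database db = env.openDatabase(null, "lastPoints.db", null, dc);

        TransactionConfig tc = new TransactionConfig();
        tc.setNoSync(true);
        Transaction txn = env.beginTransaction(null, tc);

        DatabaseEntry key = new DatabaseEntry(new byte[20]);  // 20-byte key
        DatabaseEntry data = new DatabaseEntry();
        // the cycle: read the key's data, delete it, then add the new value
        if (db.get(txn, key, data, LockMode.READ_UNCOMMITTED) == OperationStatus.SUCCESS) {
            db.delete(txn, key);
        }
        data.setData(new byte[100 * 1024]);  // payload grows toward 100KB
        db.put(txn, key, data);
        txn.commit();

        db.close();
        env.close();
    }
}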
BDB stats: BDB size 49GB
$ db_stat -m
3GB 928MB Total cache size
1 Number of caches
1 Maximum number of caches
3GB 928MB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
2127M Requested pages found in the cache (97%)
57M Requested pages not found in the cache (57565917)
6371509 Pages created in the cache
57M Pages read into the cache (57565917)
75M Pages written from the cache to the backing file (75763673)
60M Clean pages forced from the cache (60775446)
2661382 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
500593 Current total page count
500593 Current clean page count
0 Current dirty page count
524287 Number of hash buckets used for page location
4096 Assumed page size used
2248M Total number of times hash chains searched for a page (2248788999)
9 The longest hash chain searched for a page
2669M Total number of hash chain entries checked for page (2669310818)
0 The number of hash bucket locks that required waiting (0%)
0 The maximum number of times any hash bucket lock was waited for (0%)
0 The number of region locks that required waiting (0%)
0 The number of buffers frozen
0 The number of buffers thawed
0 The number of frozen buffers freed
63M The number of page allocations (63937431)
181M The number of hash buckets examined during allocations (181211477)
16 The maximum number of hash buckets examined for an allocation
63M The number of pages examined during allocations (63436828)
1 The max number of pages examined for an allocation
0 Threads waited on page I/O
0 The number of times a sync is interrupted
Pool File: lastPoints
8192 Page size
0 Requested pages mapped into the process' address space
2127M Requested pages found in the cache (97%)
57M Requested pages not found in the cache (57565917)
6371509 Pages created in the cache
57M Pages read into the cache (57565917)
75M Pages written from the cache to the backing file (75763673)
$ db_stat -l
0x40988 Log magic number
16 Log version number
31KB 256B Log record cache size
0 Log file mode
10Mb Current log file size
856M Records entered into the log (856697337)
941GB 371MB 67KB 112B Log bytes written
2GB 262MB 998KB 478B Log bytes written since last checkpoint
31M Total log file I/O writes (31624157)
31M Total log file I/O writes due to overflow (31527047)
97136 Total log file flushes
686 Total log file I/O reads
96414 Current log file number
4482953 Current log file offset
96414 On-disk log file number
4482862 On-disk log file offset
1 Maximum commits in a log flush
1 Minimum commits in a log flush
160KB Log region size
195 The number of region locks that required waiting (0%)
$ db_stat -c
7 Last allocated locker ID
0x7fffffff Current maximum unused locker ID
9 Number of lock modes
2000 Maximum number of locks possible
2000 Maximum number of lockers possible
2000 Maximum number of lock objects possible
160 Number of lock object partitions
0 Number of current locks
1218 Maximum number of locks at any one time
5 Maximum number of locks in any one bucket
0 Maximum number of locks stolen by for an empty partition
0 Maximum number of locks stolen for any one partition
0 Number of current lockers
8 Maximum number of lockers at any one time
0 Number of current lock objects
1218 Maximum number of lock objects at any one time
5 Maximum number of lock objects in any one bucket
0 Maximum number of objects stolen by for an empty partition
0 Maximum number of objects stolen for any one partition
400M Total number of locks requested (400062331)
400M Total number of locks released (400062331)
0 Total number of locks upgraded
1 Total number of locks downgraded
0 Lock requests not available due to conflicts, for which we waited
0 Lock requests not available due to conflicts, for which we did not wait
0 Number of deadlocks
0 Lock timeout value
0 Number of locks that have timed out
0 Transaction timeout value
0 Number of transactions that have timed out
1MB 544KB The size of the lock region
0 The number of partition locks that required waiting (0%)
0 The maximum number of times any partition lock was waited for (0%)
0 The number of object queue operations that required waiting (0%)
0 The number of locker allocations that required waiting (0%)
0 The number of region locks that required waiting (0%)
5 Maximum hash bucket length
$ db_stat -CA
Default locking region information:
7 Last allocated locker ID
0x7fffffff Current maximum unused locker ID
9 Number of lock modes
2000 Maximum number of locks possible
2000 Maximum number of lockers possible
2000 Maximum number of lock objects possible
160 Number of lock object partitions
0 Number of current locks
1218 Maximum number of locks at any one time
5 Maximum number of locks in any one bucket
0 Maximum number of locks stolen by for an empty partition
0 Maximum number of locks stolen for any one partition
0 Number of current lockers
8 Maximum number of lockers at any one time
0 Number of current lock objects
1218 Maximum number of lock objects at any one time
5 Maximum number of lock objects in any one bucket
0 Maximum number of objects stolen by for an empty partition
0 Maximum number of objects stolen for any one partition
400M Total number of locks requested (400062331)
400M Total number of locks released (400062331)
0 Total number of locks upgraded
1 Total number of locks downgraded
0 Lock requests not available due to conflicts, for which we waited
0 Lock requests not available due to conflicts, for which we did not wait
0 Number of deadlocks
0 Lock timeout value
0 Number of locks that have timed out
0 Transaction timeout value
0 Number of transactions that have timed out
1MB 544KB The size of the lock region
0 The number of partition locks that required waiting (0%)
0 The maximum number of times any partition lock was waited for (0%)
0 The number of object queue operations that required waiting (0%)
0 The number of locker allocations that required waiting (0%)
0 The number of region locks that required waiting (0%)
5 Maximum hash bucket length
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Lock REGINFO information:
Lock Region type
5 Region ID
__db.005 Region name
0x2accda678000 Region address
0x2accda678138 Region primary address
0 Region maximum allocation
0 Region allocated
Region allocations: 6006 allocations, 0 failures, 0 frees, 1 longest
Allocations by power-of-two sizes:
1KB 6002
2KB 0
4KB 0
8KB 0
16KB 1
32KB 0
64KB 2
128KB 0
256KB 1
512KB 0
1024KB 0
REGION_JOIN_OK Region flags
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Lock region parameters:
524317 Lock region region mutex [0/9 0% 5091/47054587432128]
2053 locker table size
2053 object table size
944 obj_off
226120 locker_off
0 need_dd
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Lock conflict matrix:
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Locks grouped by lockers:
Locker Mode Count Status ----------------- Object ---------------
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Locks grouped by object:
Locker Mode Count Status ----------------- Object ---------------
Diagnosis:
When I strace my Java process I see way too much lock contention on the Java garbage collector threads and also on the VM thread, and I don't understand the behavior.
We are spending more than 95% of the time trying to acquire locks, and I don't know what these locks are. Any info here would help.
Earlier I thought overflow pages were the problem, as the 100KB data size exceeded all the overflow page limits. So I implemented a duplicate-keys scheme, chunking my data so that each piece fits within the overflow page limits.
Now I don't see any overflow pages in my system, but I still see bad BDB read performance. (A sketch of the chunking scheme follows.)
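For reference, a hedged sketch of what such a chunking scheme can look like using sorted duplicates in the com.sleepycat.db Java API. The 4KB chunk size, the 4-byte index prefix, and the helper names are illustrative assumptions, not the exact code used; the database must be opened with dbConfig.setSortedDuplicates(true):
import com.sleepycat.db.*;
import java.io.ByteArrayOutputStream;

public class ChunkedValueSketch {
    static final int CHUNK_SIZE = 4 * 1024;  // small enough to stay on leaf pages

    // Write one logical value as N sorted-duplicate records under the same key.
    static void putChunked(Database db, Transaction txn,
                           DatabaseEntry key, byte[] value) throws DatabaseException {
        for (int off = 0, idx = 0; off < value.length; off += CHUNK_SIZE, idx++) {
            int len = Math.min(CHUNK_SIZE, value.length - off);
            byte[] chunk = new byte[4 + len];
            // 4-byte big-endian chunk index keeps the duplicates sorted in order
            chunk[0] = (byte) (idx >>> 24); chunk[1] = (byte) (idx >>> 16);
            chunk[2] = (byte) (idx >>> 8);  chunk[3] = (byte) idx;
            System.arraycopy(value, off, chunk, 4, len);
            db.put(txn, key, new DatabaseEntry(chunk));  // adds one duplicate
        }
    }

    // Read the duplicates back in order and reassemble the original value.
    static byte[] getChunked(Database db, Transaction txn, DatabaseEntry key)
            throws DatabaseException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Cursor c = db.openCursor(txn, null);
        try {
            DatabaseEntry data = new DatabaseEntry();
            OperationStatus s = c.getSearchKey(key, data, LockMode.READ_UNCOMMITTED);
            while (s == OperationStatus.SUCCESS) {
                out.write(data.getData(), 4, data.getSize() - 4);  // strip the index
                s = c.getNextDup(key, data, LockMode.READ_UNCOMMITTED);
            }
        } finally {
            c.close();
        }
        return out.toByteArray();
    }
}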
$ strace -c -f -p 5642 (the 607 errors below are futex lock waits that timed out)
Process 5642 attached with 45 threads - interrupt to quit
% time     seconds  usecs/call     calls    errors syscall
98.19    7.670403        2257      3398       607 futex
 0.84    0.065886           8      8423           pread
 0.69    0.053980        4498        12           fdatasync
 0.22    0.017094           5      3778           pwrite
 0.05    0.004107           5       808           sched_yield
 0.00    0.000120          10        12           read
 0.00    0.000110           9        12           open
 0.00    0.000089           7        12           close
 0.00    0.000025           0      1431           clock_gettime
 0.00    0.000000           0        46           write
 0.00    0.000000           0         1         1 stat
 0.00    0.000000           0        12           lseek
 0.00    0.000000           0        26           mmap
 0.00    0.000000           0        88           mprotect
 0.00    0.000000           0        24           fcntl
100.00    7.811814                 18083       608 total
The above stats show that too much time is spent locking (futex calls), and I don't understand that, because
the application is really single-threaded. I have turned on asynchronous transactions, so the writes might be
flushed asynchronously in the background, but spending that much time locking and timing out seems wrong.
So there is possibly something I'm not setting, or something weird about the way the JVM is behaving on my box.
I grepped for futex calls in one of my strace log snippets and see that a VM thread grabbed the mutex the
maximum number of times (223), followed by the garbage collector threads. The following are the lock counts and thread PIDs
within the process:
These are the GC threads (each thread grabbed the lock between 80 and 97 times):
  86 [8538]
  85 [8539]
  91 [8540]
  91 [8541]
  92 [8542]
  87 [8543]
  90 [8544]
  96 [8545]
  87 [8546]
  97 [8547]
  96 [8548]
  91 [8549]
  91 [8550]
  80 [8552]
"VM Periodic Task Thread" prio=10 tid=0x00002aaaf4065000 nid=0x2180 waiting on condition (the main problem??)
 223 [8576] ==> grabbed the lock 223 times -- not sure why this is happening…
"pool-2-thread-1" prio=10 tid=0x00002aaaf44b7000 nid=0x21c8 runnable [0x0000000042aa8000] -- the main worker thread
   34 [8648] (the main thread grabs the futex only 34 times, compared to all the other threads)
The load average seems OK, though my system thinks it has very little memory left, and I believe that
is because it is using a lot of memory for the filesystem cache:
top - 23:52:00 up 6 days, 8:41, 1 user, load average: 3.28, 3.40, 3.44
Tasks: 229 total, 1 running, 228 sleeping, 0 stopped, 0 zombie
Cpu(s): 3.2%us, 0.9%sy, 0.0%ni, 87.5%id, 8.3%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 98999820k total, 98745988k used, 253832k free, 530372k buffers
Swap: 18481144k total, 1304k used, 18479840k free, 89854800k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8424 rchitta 16 0 7053m 6.2g 4.4g S 18.3 6.5 401:01.88 java
8422 rchitta 15 0 7011m 6.1g 4.4g S 14.6 6.5 528:06.92 java
8423 rchitta 15 0 6989m 6.1g 4.4g S 5.7 6.5 615:28.21 java
$ java -version
java version "1.6.0_21"
Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
Maybe I should make my application a Concurrent Data Store app, as there is really only one thread doing the writes and reads. But I would like
to understand why my process is spending so much time on locking.
Can I try any other options? How do I prevent such heavy locking from happening? Has anyone seen this kind of behavior? Maybe this is
all normal. I'm pretty new to using BDB.
If there is a way to disable locking, that would also work, as there is only one thread that's really doing all the work; a sketch of that idea is below.
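For example, a minimal Concurrent Data Store sketch (a trade-off, not a drop-in fix: CDS gives up transactions and write-ahead logging, so crash recovery is lost; paths and names are placeholders):
import com.sleepycat.db.*;
import java.io.File;

public class CdsSketch {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig ec = new EnvironmentConfig();
        ec.setAllowCreate(true);
        ec.setInitializeCache(true);   // DB_INIT_MPOOL
        ec.setInitializeCDB(true);     // DB_INIT_CDB: CDS instead of TDS
        Environment env = new Environment(new File("/path/to/env"), ec);

        DatabaseConfig dc = new DatabaseConfig();
        dc.setAllowCreate(true);
        dc.setType(DatabaseType.BTREE);
        Database db = env.openDatabase(null, "lastPoints.db", null, dc);

        // CDS allows many readers or one writer; all writes go through
        // a write cursor instead of per-page transactional locking.
        CursorConfig wc = new CursorConfig();
        wc.setWriteCursor(true);       // DB_WRITECURSOR
        Cursor cursor = db.openCursor(null, wc);
        // ... read/delete/put through the cursor ...
        cursor.close();
        db.close();
        env.close();
    }
}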
Should I disable the filesystem cache? One thing is that my application does not utilize the cache very well: once I visit a key, I don't visit that
key again for a very long time, so it's very possible that the key has to be read from disk again. (A knob related to this is sketched below.)
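On the double-buffering side, BDB can be asked to bypass the OS page cache for database files rather than disabling the filesystem cache system-wide. A speculative sketch; I am not sure it would help here (setDirectDatabaseIO corresponds to the C API's DB_DIRECT_DB flag):
import com.sleepycat.db.EnvironmentConfig;

public class DirectIoSketch {
    static EnvironmentConfig configWithDirectIo() {
        EnvironmentConfig ec = new EnvironmentConfig();
        // Ask BDB to open database files with direct I/O (O_DIRECT where
        // supported), avoiding double-buffering in the OS page cache.
        ec.setDirectDatabaseIO(true);
        return ec;
    }
}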
It is possible that I'm thinking about this completely wrong, focusing too much on the locking behavior, and the problem is elsewhere.
Any thoughts/suggestions are welcome. Your help on this is much appreciated.
Thanks,
Rama

Hi,
It looks like you're using BDB (core), not BDB JE, and this is the BDB JE forum. Could you please repost here:
Berkeley DB
Thanks,
mark

Similar Messages

  • Problems using RMI between linux and windows.

    I have problems using RMI between Linux and Windows.
    This is my scenario:
    - Server running on a Linux PC
    - Clients running on Linux and Windows PCs
    When a Linux client disconnects, the first time the server tries to call a method of that client a rmi.ConnectException is generated, so the server can catch it, mark the client as disconnected, and not communicate with it anymore.
    When a Windows client (tested on XP and Vista) disconnects, no exception is generated (I tried to catch all the RMI exceptions), so the server cannot know that the client is disconnected and hangs trying to communicate with the Windows client.
    Any ideas?
    Thanks in advance.
    cambieri

    Thanks for your reply.
    Yes, we are implementing a sort of callback using a Publisher (remote Observable) and Subscribers (remote Observers). The pattern and related code are described very well at this link: http://www2.sys-con.com/ITSG/virtualcd/java/archives/0210/schwell/index.html (look at the notifySubscribers(Object pub, Object code) function).
    Everything works great; the only problem is this: when a Publisher residing on a Linux server tries to notify a "dead" Subscriber residing on a Windows PC, it doesn't receive the usual ConnectException and so tends to hang.
    As a workaround, we now start a new Thread for each update (notification), so only that thread is blocked (until the timeout, I guess) rather than the entire notifySubscribers function (which contacts all the Subscribers); see the sketch after this reply.
    Besides this, using a thread per update seems to give us better performance.
    Is that missing ConnectException a bug? Or are we just making some mistake?
    We are using Java 6, and when both client and server are Linux, or both are Windows, the ConnectException always happens and we don't have any problem.
    I hope this information is enough now.
    Thanks again and greetings.
    O.C.
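    A hedged reconstruction of that thread-per-notification workaround (the Subscriber interface, the list, and the bookkeeping comment are assumptions for illustration, not the actual code from this thread):
    import java.rmi.RemoteException;
    import java.util.ArrayList;
    import java.util.List;

    class PublisherSketch {
        interface Subscriber {
            void update(Object pub, Object code) throws RemoteException;
        }

        private final List<Subscriber> subscribers = new ArrayList<Subscriber>();

        void notifySubscribers(final Object pub, final Object code) {
            for (final Subscriber s : subscribers) {
                // One thread per notification: a dead Windows client blocks
                // only this thread (until the RMI timeout), not the whole loop.
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            s.update(pub, code);
                        } catch (RemoteException e) {
                            // mark s as disconnected so it is skipped next time
                        }
                    }
                }).start();
            }
        }
    }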

  • Migrate BI+ Workspace content between production and test (S9203)

    Is there an (easy) solution to migrate contents between production and test in BI+ Workspace S9.2.9.0.3? I know I could extract the contents and then just import them, but that way I lose the object permissions. The BI+ migration utility only supports migrations from older versions, not between the same version.

    That isn't a problem if you do it this way:
    Keep all your environment-specific configurations (e.g. the translation.com password and connector data) in OSGi configuration nodes in the repository.
    Use runmodes to pin certain configurations to a certain runmode.
    Have dedicated runmodes for PROD and for TEST.
    In that case you could update your TEST environment like this:
    Restore the instance from the PROD backup.
    Change the runmode definition in the start script.
    Start it up.
    Jörg

  • How to place content between header and tabs?????

    I have a header section that has to be constant throughout the portal, but below it I have 3 links,
    like I AM employee, employer, broker,
    which have to be shown only on the home page, above the tabs.
    How can I achieve this?
    How do I place content between the header and the tabs? Kindly help.

    Hi Samiran
    Try these approaches and see if that works.
    1. In the Header Section, use your header-footer shell and add a Header Portlet. This Header Portlet's associated JSP file will have all the static content in the top section. In the bottom section, add these 3 links, say in the right-hand corner. Show these links only based on some request property like isHome. Now, for the main book containing Home and the other pages, associate a BackingFile. Within this backing file, in lifecycle methods like preRender or handlePostBack, get an instance of BookManager and all the pages, and see which page is active. For the active page, check its page definition label, which will always be unique. If the page def label is like home_page_def (the page def label you give the home page), then set a key in the request property like isHome=true. My only doubt is whether, after the book's backing file is triggered, the header gets reloaded, because only then can it pick up the request attributes.
    2. Create a brand new portlet, like a HomePageLinks portlet. Make its Title property not visible, and set other user-interface properties like NoBorder, NoTheme, etc. The associated JSP will have the 3 links you mentioned, right-aligned. You can use CSS styles to align it right, etc. Now drop this portlet in the Header Shell area. You already have the HeaderPortlet on top; below it you will have this HomePageLinks portlet. Now associate a backing file with this portlet so it shows only if the book's current active page is the home page, by comparing the def label etc. as mentioned above.
    In both scenarios, the only concern is that when we click on different pages, the entire portal has to be rendered right from the top header. Only then will the backing file set the key, and the HomePageLinks portlet can show or hide accordingly.
    Try firing an event when the home page is clicked. The listener for this event can be the HomePageLinks portlet. I guess the event mechanism should work irrespective of where the portlet is placed. In the event listener, see if you can show/hide this portlet.
    The only challenge is that the header section needs to be reloaded every time you click on a tab.
    Start putting in some backing files and System.out.printlns to see whether the header section gets reloaded on clicking any tab.
    These are just my thoughts off the top of my head; other forum users may have better alternatives or different versions of the above approaches.
    Thanks
    Ravi Jegga

  • Problem of routing between inside and outside on ASA5505

    I have a ASA5505 with mostly factory default configuration. Its license allows only two vlan interfaces (vlan 1 and vlan 2). The default config has interface vlan 1 as inside (security level 100), and interface vlan 2 as outside (security level 0 and using DHCP).
    I only changed interface vlan 1 to IP 10.10.10.1/24. After I plugged a few hosts into vlan 1 ports and connected port Ethernet0/0 (in vlan 2 by default) to a live network, here are a couple of issues I found:
    a) One host I plugged in is a PC, and the other is a WAAS WAE device. Both are on vlan 1 ports. I hard-coded their IPs to 10.10.10.250 and 10.10.10.101, /24 subnet mask, gateway 10.10.10.1. I can ping from the PC to the WAE but not from the WAE to the PC, although the WAE has 10.10.10.250 in its ARP table. They are in the same vlan and the same subnet; how can that be? Here are the ping output and the WAE ARP table.
    WAE#ping 10.10.10.250
    PING 10.10.10.250 (10.10.10.250) from 10.10.10.101 : 56(84) bytes of data.
    --- 10.10.10.250 ping statistics ---
    5 packets transmitted, 0 packets received, 100% packet loss
    WAE#sh arp
    Protocol Address Flags Hardware Addr Type Interface
    Internet 10.10.10.250 Adj 00:1E:37:84:C9:CE ARPA GigabitEthernet1/0
    Internet 10.10.10.10 Adj 00:14:5E:85:50:01 ARPA GigabitEthernet1/0
    Internet 10.10.10.1 Adj 00:1E:F7:7F:6E:7E ARPA GigabitEthernet1/0
    b) None of the hosts in vlan 1 (10.10.10.0/24) can ping interface vlan 2 (address in 172.26.18.0/24, obtained via DHCP). But the ASA routing table has both 10.10.10.0/24 and 172.26.18.0/24, plus a default route learned via DHCP. Is the ASA able to route between vlan 1 and vlan 2 (inside and outside)? Any changes I can try?
    Here are ASA routing table and config of vlan 1 and vlan 2 (mostly its default).
    ASA# sh route
    Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
    D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
    N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
    E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
    i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
    * - candidate default, U - per-user static route, o - ODR
    P - periodic downloaded static route
    Gateway of last resort is 172.26.18.1 to network 0.0.0.0
    C 172.26.18.0 255.255.255.0 is directly connected, outside
    C 127.1.0.0 255.255.0.0 is directly connected, _internal_loopback
    C 10.10.10.0 255.255.255.0 is directly connected, inside
    d* 0.0.0.0 0.0.0.0 [1/0] via 172.26.18.1, outside
    interface Vlan1
    nameif inside
    security-level 100
    ip address 10.10.10.1 255.255.255.0
    interface Vlan2
    nameif outside
    security-level 0
    ip address dhcp setroute
    interface Ethernet0/0
    switchport access vlan 2
    All other ports are in vlan 1 by default.

    I should have made the config easier to read, so here is what's on the ASA and the problems I have. The ASA only allows two VLAN interfaces to be configured (defaulting to int VLAN 1, nameif inside, and int VLAN 2, nameif outside).
    Port 0: in VLAN 2 (outside), DHCP configured. VLAN 2 pulled an IP in 172.26.18.0/24, default gateway 172.26.18.1.
    Ports 1-7: in VLAN 1 (inside). VLAN 1's IP is 10.10.10.1. I set all device IPs in VLAN 1 to 10.10.10.0/24, default gateway 10.10.10.1.
    I have one PC on port 1 and one WAE device on port 2. The PC's IP is set to 10.10.10.250 and the WAE's to 10.10.10.101. The PC can ping the WAE but the WAE can't ping the PC. Both can ping the default gateway.
    If I can't ping from the inside interface to the outside interface on the ASA, how can I verify that inside hosts can reach outside addresses and vice versa? I looked at the ASA docs but didn't find how to set up routing between inside and outside. They are both connected interfaces; shouldn't they already route between each other?
    Thanks a lot

  • Performance Problems with "For all Entries" and a big internal table

    We have big performance problems with the following statement:
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
      FOR ALL ENTRIES IN gt_zmon_help
        WHERE
        status = 'IAI200' AND
        logdat IN gs_dat AND
        ztrack = gt_zmon_help-ztrack.
    The internal table gt_zmon_help has over 1,000,000 entries.
    Does anyone have an idea how to improve the performance?
    Thank you!

    >
    Matthias Weisensel wrote:
    > We have big Performance Problems with following Statement:
    >
    >  
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
    >   FOR ALL ENTRIES IN gt_zmon_help
    >     WHERE
    >     status = 'IAI200' AND
    >     logdat IN gs_dat AND
    >     ztrack = gt_zmon_help-ztrack.
    >
    > In the internal table gt_zmon_help are over 1000000 entries.
    > Anyone an Idea how to improve the Performance?
    >
    > Thank you!
    You can't expect miracles. With over a million entries in your itab, any select is going to take a while. Do you really need all these records in the itab? How many records is the select bringing back? I'm assuming that you have, and are using, indexes on your ZEEDMT_ZMON table.
    In this situation I'd first try to think of another way of running the query and restricting the amount of data, but if that were not possible I'd just run it in the background and accept that it is going to take a long time.

  • 2.1 RC1 - Performance problem, typing lag in EA2 and RC1

    Hi!
    I have searched and searched the forum but cannot find any reference to this specific issue. Please accept my apologies if this has already been posted, as I can't believe there are no other occurrences of this out there.
    On moving from 1.5.5 to either 2.1 EA2 or RC1, I experience massive performance issues that make SQL Developer unusable.
    Basically, just typing into a SQL worksheet consumes most of my machine's CPU and results in a huge amount of lag: typing a simple SELECT statement takes 10 to 20 seconds just for the text to catch up with my typing! It's infuriating!
    I've disabled all of the Code Insight options, ensured the 'Select default path to look for scripts' field is empty, and tried both the JRE-inclusive and -exclusive versions, all with the same result. If I fire up 1.5.5 I'm immediately back in business, with no lag between typing and the display.
    My laptop is well spec'd - XP Pro, 2Ghz dual core, 2 GB RAM
    Any thoughts other than what I've tried above?
    Many thanks in advance!
    Edited by: user4523743 on 07-Dec-2009 03:21

    Okay, this is weird. I can get around this by setting the 'Select default path for scripts' preference to something other than blank!
    I'm wondering if this is because a group policy sets the default home drive / My Documents folder to something on a network share. Could it be that having this value blank causes SQL Developer to poll that share, and therefore the network, causing the performance issue?
    As it is, setting the value to something on the local drive (C:\) seems to fix it, contrary to what other posts have said on the matter of this preference!

  • Performance Problems on Faces Navigation Diagram and Hyperthreading query

    Am I the only one having performance problems when dealing with faces-config diagrams of about 35 JSPs displayed on the sheet, using JDev 10.1.3? It takes my workstation about a minute and a half to update the name of an arrow. The most stressed component during this task seems to be the CPU.
    And just another question: has anybody investigated how JDev's performance is affected by enabling or disabling hyperthreading? In my case, CPU usage only manages to reach 50%. I'm tempted to switch HT off to let JDev use all the CPU power, if that would help.

    Hello Diego,
    you mentioned that you compared a BEx query with the Web Intelligence report. Could you provide more details here?
    - What are the elements in the rows, columns, and free characteristics in the BEx query?
    - Was the query executed as designed in the BEx Query Designer with BEx web reporting?
    - What are the elements in the WebIntelligence query panel?
    thanks
    Ingo

  • HD content between MacBook and iMac

    I searched around the forums and couldn't find anything that answers this exact question. It's a bit of a tough one, I'm sure, so here goes.
    Recently I purchased a new 13" MacBook Pro and used the automatic data transfer to set up the machine, so it has exactly the same programs as the iMac I purchased a year or so ago. Now, I purchased a few HD TV shows on the MacBook to test a way to transfer shows between computers. I know how to do the iPod transfer method (moving content to the iPod, then switching machines and checking for purchased content), but that method only worked for the standard-definition version of the show.
    The question: how do I get the HD version to move from the MacBook to the iMac and vice versa? Is there a method, or will the standard version just have to do? This is not urgent; I'm just curious.

    The HD version is not synced to your iPod as it cannot be played there.
    You'll have to connect your two Macs together or use an external HD or flash drive to transfer the HD version.
    See this: How to use FireWire target disk mode, http://support.apple.com/kb/HT1661

  • Performance problem with Integration with COGNOS and Bex

    Hi Gems
    I have a performance problem with some of my queries when integrating with COGNOS.
    My query is simple; it gets the data for a date interval:
    From date: 20070101
    To date: 20070829
    When executed in BEx the query takes 2 minutes, but when executed in COGNOS it takes almost 10 minutes or more.
    Is there anywhere we can debug how the report sends data to COGNOS, such as debugging the OLE DB?
    And how can we increase the performance of the query in COGNOS?
    Thanks in Advance
    Regards
    AK

    Hi,
    Please check the following CA Unicenter config files on the SunMC server:
    - Is the Event Adapter (ea-start) running? Without this daemon, no event forwarding to CA Unicenter is done, nor does discovery from CA Unicenter work.
    How to debug:
    - Run ea-start in debug mode:
    # /opt/SUNWsymon/SunMC-TNG/sbin/ea-start -d9
    - Check whether the Event Adapter has been set up:
    # /var/opt/SUNWsymon/SunMC-TNG/cfg_sunmctotng
    - Check the CA log file:
    # /var/opt/SUNWsymon/SunMC-TNG/SunMCToTngAdaptorMain.log
    If that is all fine, check this site; it explains how to discover a SunMC agent from CA Unicenter.
    http://docs.sun.com/app/docs/doc/817-1101/6mgrtmkao?a=view#tngtrouble-6
    Kind Regards

  • Transfering control between main and a thread

    Hi,
    How do I transfer control from a thread that has been started from main back to main? I have a variable: if its value is true, control should go back to main; after performing a task, the value is set to false and control should be passed back to the thread.
    Please help me with some sample code.

    Hello.
    Control back and forth between threads?
    I'm sorry, I don't know if I'm getting you right.
    Unlike method calls, where control returns to the calling method, threads are designed to execute independently.
    If you spawn a thread from the main method, both main and the spawned thread execute simultaneously, and the JVM simulates the operating system's switching of control between the created threads.
    We can, however, sleep, yield, or wait on a resource.
    You can do everything you mentioned in the spawned thread, by writing it in the run method.
    skeleton code (corrected so it compiles and uses wait/notify properly):
    public class Exp {
        private final Object lock = new Object();
        private boolean mainDone = false;

        public static void main(String[] args) {
            new Exp();
        }

        public Exp() {
            new SpawnedThread().start();
            // perform whatever main has to do first
            synchronized (lock) {
                mainDone = true;
                lock.notify();           // hand control to the spawned thread
            }
        }

        class SpawnedThread extends Thread {
            public void run() {
                synchronized (lock) {
                    while (!mainDone) {
                        try {
                            lock.wait(); // block until main signals
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                }
                // resume the task here
            }
        }
    }

  • Difference between main and other threads

    There are many things you cannot do in the main thread (the thread running the main method) compared to other threads. How exactly do they differ?

    Why doesn't it allow running static methods from
    main, like dumpStack()? You can run static methods from main. Do you mean the main thread or the main method?
    You can't call non-static methods directly from the main method, but that's just because it's static. You can't call non-static methods from any static method without an instance on which to call them. Nothing special about main in that respect; a minimal example follows.
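    A minimal example of that last point (plain Java, added here for illustration):
    public class MainDemo {
        void greet() { System.out.println("hello"); }  // non-static instance method

        public static void main(String[] args) {
            // greet();               // would not compile: main() is static
            new MainDemo().greet();   // fine: called on an instance
            Thread.dumpStack();       // static methods like dumpStack() work directly
        }
    }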

  • I need to migrate data from Lion to OSX do you know of any problems with content between the two systems?

    I'm using a MacBook Air and OS X 10.9.1. When I upgraded, I made a mistake with the user name and so created another user. Now I understand I need to migrate the data between the two systems, but there might be trouble with some of the content files transferring across? Has anyone done this before? If so, what were the pitfalls? Thanks, and help! No one typing this feels as if they're on a steep learning curve at all!

    Hi northpole
    Welcome to the BlackBerry Support Forums.
    Sorry to hear that you're having issues with your BlackBerry! Before trying anything else, can you check a few settings, or undo them, on your device?
    northpole wrote:
    I have memory cleaning scheduled to run every 1 hour.
    First advice: disable Memory Cleaner! You don't need it, and it can cause freezing and clocking. To disable it, from your home screen go to Options > Security > Advanced Security Settings > Memory Cleaner and disable that feature. After disabling it, reboot your device by removing the battery while the device is powered on, waiting at least a minute, and reinserting it; your device may take a long time to reboot.
    Then monitor your device for the next few hours and check whether you still see the same problem.
    let us know.

  • Problem with values between 0 and +-0.49 URGENT

    I have a strange problem. I have an internal table:
    DATA : BEGIN OF xxx OCCURS 0,
           value(15),
           END OF xxx.
    If xxx-value is between 0.01 and 0.49, then
    if xxx-value GE 0
    returns TRUE and
    if xxx-value > 0
    returns FALSE.
    If xxx-value is between -0.01 and -0.49, then
    if xxx-value GE 0
    also returns TRUE and
    if xxx-value > 0
    returns FALSE.
    Our SAP version is 4.0b. Does anyone know a solution to this very urgent and serious problem?
    Thanks in advance

    hi,
    declare the field numerically, e.g.:
    DATA : BEGIN OF xxx OCCURS 0,
           value TYPE f,
           END OF xxx.
    With value(15) the field is a character field, which is presumably why the comparisons misbehave; with a numeric type they are done numerically.
    Also, the 2nd condition is not a valid test case, since a value between -0.01 and -0.49 will always be less than 0, so
    if xxx-value GE 0
    should return FALSE and
    if xxx-value > 0
    returns FALSE as well.

  • Improving performance in a merge between local and remote query

    If you try to merge two queries, one local (e.g. a table in Excel) and one remote (a table in SQL Server), the entire remote table is loaded into memory in order to apply the NestedJoin condition. This can be very slow. In my case, the goal is to import only
    those rows that have a product name listed in a local Excel table.
    I used SelectRows with the list of values from the local query (which has only one column) in order to get an "IN ('value1', 'value2', ...)" condition in the SQL statement generated by Power Query (see the examples below).
    Questions:
    Is there another way to do that in "M"?
    Is there a way to build such a query (filter a table by using values obtained in another query) by using the user interface?
    Is this a scenario that could be better optimized in the future by improving query folding made by Power Query?
    Thanks for the feedback!
    Local Query
    let
        LocalQuery = Excel.CurrentWorkbook(){[Name="LocalTable"]}[Content]
    in
        LocalQuery
    Remote Query
    let
        Source = Sql.Databases("servername"),
        Database = Source{[Name="databasename"]}[Data],
        RemoteQuery = Database{[Schema="schemaname",Item="tablename"]}[Data]
    in
        RemoteQuery
    Merge Query (from Power Query user interface)
    let
        Merge = Table.NestedJoin(LocalQuery,{"ProductName"},RemoteQuery,{"ProductName"},"NewColumn",JoinKind.Inner),
        #"Expand NewColumn" = Table.ExpandTableColumn(Merge, "NewColumn", {"Description", "Price"}, {"NewColumn.Description", "NewColumn.Price"})
    in
        #"Expand NewColumn"
    Alternative merge approach (editing M - is it possible in user interface?)
    let
        #"Filtered Rows" = Table.SelectRows(RemoteQuery, each List.Contains ( Table.ToList(LocalQuery), [ProductName] ))
    in
        #"Filtered Rows"
    Marco Russo (Blog, Twitter, LinkedIn) - sqlbi.com: Articles, Videos, Tools, Consultancy, Training
    Format with DAX Formatter and design with DAX Patterns. Learn Power Pivot and SSAS Tabular.

    Bingo! You've found a serious performance issue!
    The very same result can be produced in a fast or a slow way.
    Slow technique: do the RemoveColumns before the SelectRows (maybe you don't have to apply any transformations to the table you want to filter before the SelectRows; I haven't tested this):
    let
        Source = Sql.Databases(".\k12"),
        AdventureWorksDW2012 = Source{[Name="AdventureWorksDW2012"]}[Data],
        dbo_FactInternetSales = AdventureWorksDW2012{[Schema="dbo",Item="FactInternetSales"]}[Data],
        #"Removed Columns" = Table.RemoveColumns(dbo_FactInternetSales,{"SalesOrderLineNumber", "RevisionNumber", "OrderQuantity", "UnitPrice", "ExtendedAmount", "UnitPriceDiscountPct", "DiscountAmount", "ProductStandardCost", "TotalProductCost",
    "SalesAmount", "TaxAmt", "Freight", "CarrierTrackingNumber", "CustomerPONumber", "OrderDate", "DueDate", "ShipDate", "DimCurrency", "DimCustomer", "DimDate(DueDateKey)", "DimDate(OrderDateKey)", "DimDate(ShipDateKey)", "DimProduct", "DimPromotion", "DimSalesTerritory",
    "FactInternetSalesReason"}),
        #"Filtered Rows" = Table.SelectRows(#"Removed Columns", each List.Contains(Selection[ProductKey],[ProductKey]))
    in
        #"Filtered Rows"
    Fast technique: do the RemoveColumns after the SelectRows
    let
        Source = Sql.Databases(".\k12"),
        AdventureWorksDW2012 = Source{[Name="AdventureWorksDW2012"]}[Data],
        dbo_FactInternetSales = AdventureWorksDW2012{[Schema="dbo",Item="FactInternetSales"]}[Data],
        #"Filtered Rows" = Table.SelectRows(dbo_FactInternetSales, each List.Contains(Selection[ProductKey],[ProductKey])),
        #"Removed Columns" = Table.RemoveColumns(#"Filtered Rows",{"SalesOrderLineNumber", "RevisionNumber", "OrderQuantity", "UnitPrice", "ExtendedAmount", "UnitPriceDiscountPct", "DiscountAmount", "ProductStandardCost", "TotalProductCost",
    "SalesAmount", "TaxAmt", "Freight", "CarrierTrackingNumber", "CustomerPONumber", "OrderDate", "DueDate", "ShipDate", "DimCurrency", "DimCustomer", "DimDate(DueDateKey)", "DimDate(OrderDateKey)", "DimDate(ShipDateKey)", "DimProduct", "DimPromotion", "DimSalesTerritory",
    "FactInternetSalesReason"})
    in
        #"Removed Columns"
    I think that Power Query team should take a look at this.
    Thanks!
    Marco Russo (Blog, Twitter, LinkedIn) - sqlbi.com: Articles, Videos, Tools, Consultancy, Training
    Format with DAX Formatter and design with DAX Patterns. Learn Power Pivot and SSAS Tabular.
