Some questions & performance problems

Hello all,
I'm currently trying to build an application using Berkeley DB,
but I'm seeing really ugly performance (439 inserts/sec). I also tried switching to 'DB_QUEUE', but I get an
'illegal record number size' error when putting records into my db.
# time ./prog 10000
done in 22766.2 ms (439.248 updates/sec)
real 0m22.807s
user 0m0.320s
sys 0m0.580s
When I call 'db->set_re_len(1024)' and change 'DB_BTREE' to 'DB_QUEUE' (in the db->open call),
I get this:
illegal record number size
terminate called after throwing an instance of 'DbException'
what(): Db::put: Invalid argument
Aborted (core dumped)
Could someone explain what I'm doing wrong here?
Thank you very much for any help.
Laurent
Here is the program:
========================================================
#include <db_cxx.h>
#include <sys/time.h>
#include <iostream>
#include <cstdlib>
#include <cstring>

char buffer[1024];

#define get_timeval(timeval) gettimeofday(&(timeval), 0)
struct timeval start, end;

void
usage()
{
    std::cerr << "You should give a number as ARG1 !\n";
    exit(1);
}

int
main(int argc, char** argv)
{
    if ( argc != 2 )
        usage();

    DbEnv*      env;
    Db*         db;
    DbSequence* seq;
    Db*         dbSeq;
    Dbt         seqKey;
    Dbt         key;
    Dbt         data;

    env = new DbEnv(0);
    env->set_flags( DB_AUTO_COMMIT , 1 );
    env->set_flags( DB_TXN_WRITE_NOSYNC , 1 );

    // Reusable key/data buffers.
    key.set_data( new db_seq_t );
    key.set_size( sizeof(db_seq_t) );
    data.set_data( new char[1024] );
    data.set_size( 1024 );

    u_int32_t flags = DB_CREATE
                    | DB_PRIVATE
                    | DB_INIT_LOCK
                    | DB_INIT_LOG
                    | DB_INIT_TXN
                    | DB_INIT_MPOOL;

    env->open( "/files/lauma/test/PONG", flags, 0 );

    dbSeq = new Db( env, 0 );
    dbSeq->open( 0, "sequences.db", 0, DB_BTREE, DB_CREATE, 0640 );

    db = new Db( env, 0 );
    db->set_pagesize( 65536 );
    //db->set_re_len( 1024 );
    db->open( 0, "mytables.db", NULL, DB_BTREE, DB_CREATE, 0640 );

    seq = new DbSequence( dbSeq, 0 );
    seqKey.set_data( const_cast<char*>("mytables.db") );
    seqKey.set_size( strlen("mytables.db") + 1 );
    seq->open( 0, &key, DB_CREATE );   // note: probably meant &seqKey (as in the second version below)

    int count = atoi( argv[1] );
    get_timeval(start);

    for ( int i = 0; i < count; ++i )
    {
        if ( (i % 1000) == 0 )
            std::cout << i << " saves ...\n";

        // One transaction per insert.
        DbTxn* txn;
        env->txn_begin( 0, &txn, 0 );

        db_seq_t dbid;
        seq->get( txn, 1, &dbid, 0 );

        memmove( key.get_data(), &dbid, sizeof(db_seq_t) );
        memmove( data.get_data(), buffer, 1024 );

        db->put( txn, &key, &data, 0 );
        txn->commit(0);
    }

    get_timeval(end);
    double d = ((end.tv_sec * 1000.0 + end.tv_usec / 1000.0)
              - (start.tv_sec * 1000.0 + start.tv_usec / 1000.0));
    std::cout << "done in " << d << " ms ("
              << (d ? (count * 1000.0 / d) : -1) << " updates/sec)" << std::endl;

    seq->close(0);
    dbSeq->close(0);
    db->close(0);
    env->close(0);

    delete seq;
    delete dbSeq;
    delete db;
    delete env;
}
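
A note on the 'illegal record number size' error: Queue (and Recno) databases key records by logical record number, so the key Dbt must hold a db_recno_t (a 32-bit unsigned integer) of exactly sizeof(db_recno_t) bytes, and the 8-byte db_seq_t key used above is rejected with that message. The following is only a minimal sketch of what a Queue insert could look like; the helper name and the reuse of the sequence value as the record number are illustrative assumptions, not taken from the thread.

// Illustrative sketch only (not the original program): putting one
// fixed-length record into a DB_QUEUE database.  Queue keys are logical
// record numbers (db_recno_t), so an 8-byte db_seq_t key triggers
// "illegal record number size".  Assumes db was opened with DB_QUEUE and
// db->set_re_len(1024) was called before Db::open.
#include <db_cxx.h>
#include <cstring>

void queue_put_sketch(Db* db, DbTxn* txn, db_seq_t dbid, const char* payload)
{
    // Use the sequence value as the record number (record numbers start at 1).
    db_recno_t recno = (db_recno_t)dbid;
    Dbt key( &recno, sizeof(db_recno_t) );   // key size must be sizeof(db_recno_t)

    char record[1024];                       // must match set_re_len(1024)
    memset( record, 0, sizeof(record) );
    strncpy( record, payload, sizeof(record) - 1 );
    Dbt data( record, sizeof(record) );

    // Alternatively, the DB_APPEND flag would let Berkeley DB allocate the
    // next record number itself, removing the DbSequence round trip per insert.
    db->put( txn, &key, &data, 0 );
}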

Thanks for your answer.
But it seems the problem is elsewhere.
The problem occurs when I use a 'db_seq_t' as the key value, as in:
for ( ... )
{
    db_seq_t dbid = /* ... get sequence ... */;
    Dbt key( &dbid , sizeof(db_seq_t) );
    db->put( txn , &key , &data , 0 );
}
But if I do the following:
for ( ... )
{
    db_seq_t dbid = /* ... get sequence ... */;
    char buffer[11];                        // 10 digits + terminating '\0'
    sprintf( buffer , "%010lld" , dbid );
    Dbt key( buffer , 10 );
    db->put( txn , &key , &data , 0 );
}
The performance is OK!
Do you have any idea?
Thank you very much
Laurent Marzullo
I've pasted the new code here:
=============================================
#include <db_cxx.h>
#include <sys/time.h>
#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <cstring>

char buffer[1024];

#define get_timeval(timeval) gettimeofday(&(timeval), 0)
struct timeval start, end;

void
usage()
{
    std::cerr << "prog <count> <char | int>\n";
    exit(1);
}

u_int32_t pagesize    = 32 * 1024;
u_int     bulkbufsize = 4 * 1024 * 1024;
u_int     logbufsize  = 8 * 1024 * 1024;
u_int     cachesize   = 32 * 1024 * 1024;
u_int     datasize    = 255;
u_int     keysize     = 8;
u_int     numitems    = 0;

int
main(int argc, char** argv)
{
    if ( argc != 3 )
        usage();

    bool kchar = false;
    if ( strcmp( argv[2], "char" ) == 0 )
        kchar = true;
    else if ( strcmp( argv[2], "int" ) == 0 )
        kchar = false;
    else
        usage();

    DbEnv*      env;
    Db*         db;
    DbSequence* seq;
    Db*         dbSeq;
    Dbt         seqKey;
    Dbt         keychar;
    Dbt         keyint;
    Dbt         data;

    env = new DbEnv(0);
    env->set_flags( DB_AUTO_COMMIT , 1 );
    env->set_flags( DB_TXN_WRITE_NOSYNC , 1 );
    env->set_lg_bsize( logbufsize );
    env->log_set_config( DB_LOG_IN_MEMORY | DB_LOG_DSYNC , 0 );

    keychar.set_flags( DB_DBT_USERMEM );
    keyint.set_flags( DB_DBT_USERMEM );
    data.set_flags( DB_DBT_USERMEM );
    data.set_data( new char[datasize] );
    data.set_size( datasize );
    data.set_ulen( datasize );

    u_int32_t flags = DB_CREATE
                    | DB_PRIVATE
                    | DB_INIT_LOCK
                    | DB_INIT_LOG
                    | DB_INIT_TXN
                    | DB_INIT_MPOOL;

    //env->log_set_config( DB_LOG_IN_MEMORY , 1 );
    //env->set_cache_max( 0, 8388608 );
    env->set_cachesize( 1, 8388608, 1 );
    env->open( "/files/lauma/test/PONG", flags, 0 );

    dbSeq = new Db( env, 0 );
    dbSeq->open( 0, "sequences.db", 0, DB_BTREE, DB_CREATE, 0640 );

    db = new Db( env, 0 );
    db->set_pagesize( 65536 );
    //db->set_re_len( 1024 );
    db->open( 0, "mytables.db", NULL, DB_BTREE, DB_CREATE, 0666 );

    seq = new DbSequence( dbSeq, 0 );
    seqKey.set_data( const_cast<char*>("mytables.db") );
    seqKey.set_size( strlen("mytables.db") + 1 );
    seq->open( 0, &seqKey, DB_CREATE );

    int count = atoi( argv[1] );
    get_timeval(start);

    // Key buffer for the "char" case, declared outside the loop so the Dbt
    // keeps pointing at valid memory (11 bytes: 10 digits + '\0').
    char keybuf[11];

    for ( int i = 0; i < count; ++i )
    {
        if ( (i % 1000) == 0 )
            std::cout << i << " saves ...\n";

        DbTxn* txn;
        env->txn_begin( 0, &txn, 0 );

        db_seq_t dbid;
        seq->get( txn, 1, &dbid, 0 );

        if ( kchar )
        {
            // Zero-padded decimal string key.
            sprintf( keybuf, "%010lld", (long long)dbid );
            keychar.set_data( keybuf );
            keychar.set_size( 10 );
            keychar.set_ulen( 10 );
        }
        else
        {
            // Raw 8-byte integer key.
            keyint.set_data( &dbid );
            keyint.set_size( sizeof(db_seq_t) );
            keyint.set_ulen( sizeof(db_seq_t) );
        }

        data.set_data( buffer );
        data.set_size( datasize );
        memset( buffer, 'A', datasize );
        buffer[1023] = '\0';

        if ( kchar )
            db->put( txn, &keychar, &data, 0 );
        else
            db->put( txn, &keyint, &data, 0 );

        txn->commit(0);
    }

    get_timeval(end);
    double d = ((end.tv_sec * 1000.0 + end.tv_usec / 1000.0)
              - (start.tv_sec * 1000.0 + start.tv_usec / 1000.0));
    std::cout << "done in " << d << " ms ("
              << (d ? (count * 1000.0 / d) : -1) << " updates/sec)" << std::endl;

    seq->close(0);
    dbSeq->close(0);
    db->close(0);
    env->close(0);

    delete seq;
    delete dbSeq;
    delete db;
    delete env;
}
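
A likely explanation for the int-key vs. char-key difference, assuming a little-endian x86 host and the default btree key comparison (neither is confirmed in the thread): the btree compares keys as unsigned byte strings, and a raw db_seq_t stores its least-significant byte first, so consecutive sequence numbers end up scattered across the key space and successive inserts touch different parts of the tree, while the zero-padded decimal string sorts in increasing order, so each insert appends near the rightmost leaf and the working set stays small. Below is a hedged sketch of keeping an 8-byte key while restoring sorted insert order, by encoding the sequence number big-endian; the helper names are illustrative.

// Illustrative sketch: encode a db_seq_t key most-significant byte first so
// the default lexicographic (memcmp-style) btree comparison orders keys
// numerically, regardless of host byte order.
#include <db_cxx.h>

static void encode_seq_key_big_endian(db_seq_t id, unsigned char out[8])
{
    for (int i = 7; i >= 0; --i) {     // write the least-significant byte last
        out[i] = (unsigned char)(id & 0xff);
        id >>= 8;
    }
}

static int put_with_seq_key(Db* db, DbTxn* txn, db_seq_t dbid, Dbt* data)
{
    unsigned char keybuf[8];
    encode_seq_key_big_endian(dbid, keybuf);
    Dbt key(keybuf, sizeof(keybuf));
    return db->put(txn, &key, data, 0);
}

Another option along the same lines would be Db::set_bt_compare with a callback that compares key bytes as 64-bit integers, which keeps the native key representation.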

Similar Messages

  • Some question and problems on FSB:Dram

Hi to all, I have a simple question for you (I hope it's simple :D ).
I have a P4 3.2 GHz, as you can see here:
FSB is 200 MHz and that's OK.
But I have 4 PC3200 RAM modules in dual-channel mode; I think their frequency should be 200 MHz but it's only 160, and I can't activate PAT, as you can see here:
Now my question is: why?
Thanks to all

    Quote
    Originally posted by NovJoe
    Simple, as displayed in the second picture, you've set your DRAM Frequency to DDR 333MHz which will be detected as 320MHz for FSb 800MHz CPU on DDR 400MHz RAMs. Due to this, CPU-Z will think that you are running in 5:4 ratio instead of 1:1 ratio. To obtain 1:1 ratio, you must set your DRAM Frequency to 400MHz in BIOS and make sure that DDR Clock is reading it at 400MHz also before saving to CMOS. Check with CPU-Z again and you'll see that it'll display 200MHz(RAMs are double rated where you've to multiply the rates by 2) and ratio at 1:1.
    Isn't so easy...
    My actual configuration:
    Dynamic Overclocking: Private
    Performance Mode: Slow (If i put fast or greater... KABOOM :D)
    CPU Ratio: Locked
    DRAM Frequency: 400Mhz
    Spread Spectrum: Disabled
    Adjust CPU Bus Clock(Mhz): 200
DDR Clock: 333
Adjust AGP/PCI Clock (MHz): 66.66/33.33
    CPU Vcore Adjust: NO
    CPU Vcore: 1.5500V
    DDR Power Voltage: 2.65V
    AGP Power Voltage: 1.55V
I can't understand why DDR Clock is 333 :(

  • Yosemite Mail performance problems

    I upgraded to OS 10.10 Yosemite this morning. When I opened the new Mail application, I noticed some significant performance problems.
I organize all my emails by conversation (i.e., they display by thread), and I like to expand those conversations on occasion. However, whenever I choose the menu command to expand all conversations (View-->Expand All Conversations), there is a 15-30 second delay before the conversations expand.
    Switching between mailboxes incurs a 2-3 second delay before the mailbox I'm switching to appears in the viewer pane.
    Flagging and unflagging messages incurs a 10-15 second delay before the message is flagged or unflagged.
I tried the usual things to fix problems with Mail. All of my mailboxes have been rebuilt, and I also manually re-indexed them. I'm running a MacBook Air with a 1.7 GHz Intel i7 processor and 4 GB of RAM. All my mailboxes are on a Microsoft Exchange server (which I realize might be all or part of the problem, though I didn't have any issues with previous OS X versions).
    So, my questions: is there a general performance problem with Mail in Yosemite? And if there is, will Apple release a fix?

    Please follow these directions to delete the Mail "sandbox" folder.
    Back up all data.
    Triple-click anywhere in the line below on this page to select it:
    ~/Library/Containers/com.apple.mail
    Right-click or control-click the highlighted line and select
              Services ▹ Reveal
    from the contextual menu.* A Finder window should open with a folder named "com.apple.mail" selected. If it does, move the selected folder—not just its contents—to the Desktop. Leave the Finder window open for now.
    Restart the computer. Launch Mail and test. If the problem is resolved, you may have to recreate some of your Mail settings. Any custom stationery that you created may be lost. Ask for instructions if you want to preserve that data. You can then delete the folder you moved and close the Finder window.
    Caution: If you change any of the contents of the sandbox, but leave the folder itself in place, Mail may crash or not launch at all. Deleting the whole sandbox will cause it to be rebuilt automatically.
    *If you don't see the contextual menu item, copy the selected text to the Clipboard by pressing the key combination  command-C. In the Finder, select
              Go ▹ Go to Folder...
    from the menu bar and paste into the box that opens by pressing command-V. You won't see what you pasted because a line break is included. Press return.

  • Performance problems with Leopard 10.5.1

    Hello,
I use a 24-inch aluminium iMac (2.8 GHz) and upgraded to Leopard. There are some major performance problems and bugs in the current version of Leopard:
1. While accessing USB devices, display speed, window movement, animations, etc. slow down.
2. Adobe CS3 Photoshop 10.0.1 and Flash CS3 are sometimes extremely slow. I tried the recent demo packages from Adobe:
2.1. The Photoshop "Save for Web" dialogue slows down the system completely, and the problem persists after quitting Photoshop. A restart is then necessary.
2.2. Flash CS3 movie preview is very slow and stuttering. It's so slow you cannot tell how the real movie will flow.
2.3. The recent Flash Player 9,0,115,0 with hardware acceleration enabled doesn't really work with QuartzGL-enabled Leopard: the movies slow down a lot. Try www.neave.tv for example.
3. Safari, Mail and other bundled software sometimes hang and have to be force quit. It doesn't matter whether QuartzGL is enabled or not. This especially happens to my system after it has been online for some hours.
4. A lot of Apple applications don't seem to work with 2D Extreme enabled. Why is this? Apple support told me Leopard would have much better 2D Extreme support. Quartz 2D Extreme in OS X 10.4 worked with all applications, and I guess it's the same feature as "QuartzGL" in Leopard. So Leopard isn't finished here. It would be nice if Apple could make its own software QuartzGL compatible.
5. Very often the desktop slows down or lags. This is the main reason I often still switch to a Windows XP PC to do work in a faster, less annoying way.
6. Safari sometimes crashes randomly; it is still unstable. It also crashes more often if you resize/move the window a lot, so I guess it is a graphics extension-related problem.
    I hope you people from Apple will fix these annoying points and optimize your new system in the next update release.
    Best regards

Thanks for your answer. I repaired the disk as described above. There were some errors; a file index was wrong (I don't remember the exact phrase). Now Disk Utility reports the partition was successfully repaired / the volume appears to be OK.
    The crashes in Safari are gone, but all other described problems still exist. Adobe CS3 is not really usable for me.
By the way, iMacSoftwareUpdate 1.3, which was replaced by the OS X 10.5.1 update, contains an extension called AppleVADriver.kext that does not exist in the OS X 10.5.1 update. Is it an important extension?

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
Within a non-global zone I'm running a Tomcat application (an institutional repository) connecting via localhost to a PostgreSQL database (the OS-provided version). The processor load is typically not very high, as seen below:
NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
    49 postgres  749M  669M   4,7%   7:14:38  13%
     1 jboss    2519M 2536M    18%  50:36:40 5,9%
We are not 100% sure why we run into performance problems, but when it happens the application slows down and swaps out (see below). When it settles, everything seems to return to normal. When the problem is acute the application is totally unresponsive.
NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
    1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
Average     77.34   763.71   771.43 19680.67     0.00
Making more memory available to tomcat seemed to worsen the problem or at least didn't prove to have any positive effect.
My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
An unofficial performance evaluation of the database with "vacuum analyze" took 19 minutes on the server and only 1 minute on a desktop PC. This is horrific when taking the hardware into consideration.
    The short story:
I'm trying different steps but am running out of ideas. We've read that the database block size and file system block size should match: PostgreSQL's is 8 KB and ZFS's is 128 KB. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change…
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
You can change the record size with "zfs set recordsize=8k <dataset>".
It will only take effect for newly written data, not existing data.

  • Performance problems with File Adapter and XI freeze

    Hi NetWeaver XI geeks,
We are deploying an XI-based product and are encountering some huge performance problems. Here are the scenario and the issues:
    - NetWeaver XI 2004
    - SAP 4.6c
    - Outbound Channel
    - No mapping used and only the iDocs Adapter is involved in the pipeline processing
    - File Adapter
- message file size < 2 KB
We have narrowed the problem down to the IDoc adapter's performance.
We are using a file channel: every 15 seconds a file in valid IDoc format is placed in a folder, and the IDoc adapter picks up the file from this folder and sends it to the SAP R/3 instance.
For a few minutes (approx. 5 min) it works (CPU usage is below 20%, even though the processing time seems huge: 5 sec/msg), but after that the application gets blocked and the CPU is overloaded at 100% (2 disp_worker.exe processes at 50% each).
If we inject several files into the source folder at the same time, or if we decrease the gap between the creation of 2 IDoc files (from 15 seconds to 10 seconds), the process blocks after posting 2-3 docs to SAP R/3.
Could you point us to some reasons that could provoke this behavior?
Basically we are looking for help in improving the performance of the IDoc adapter.
    Thanks in advance for your help and regards,
    Adalbert

    Hi Bhavesh,
    Thanks for your suggestions. We will test...
We wonder whether the hardware is the cause of this extremely poor performance.
    Our XI server is:
    •     Windows 2003 Server
    •     Processors: 2x3GHZ
•     RAM: 4 GB (the memory is not saturated)
The messages are well-formed IDocs (single-line INVOICES).
Some posts talk about 2000 messages being processed in a few seconds... whereas we get 5 sec per message.
Thanks for your help.
    Adalbert

  • Performance problems when using Premiere Elements for photo slideshows

    Hello,
I had been using Premiere Elements 9 (PE9) to make a simple slideshow for my parents from their vacation trip and I ran into some serious performance problems.  I had used it to create similar projects before, but not nearly as big.  This one is like 260 photos, so basically it is 260 separate clips.  I have a POWERHOUSE workstation (see below) so it isn't my PC.  Even when PE9 crashes, my performance monitor shows that my CPU and RAM aren't even halfway utilized.  I finally switched to Windows Movie Maker of all things and it worked seamlessly, amazing really.  I'm wondering if I was just using PE9 for something other than what it was designed for since there weren't really any video clips, just a ton of photos that I made into video clips, if that makes sense.  Based upon my experience with this so far, I can't imagine using PE9 anymore for anything really.  I might have the need for a more professional video editing program in the near future, although it does seem like PE has a lot of features.  How can I make sure it utilizes my workstation to its full potential?  Here are my specs:
    PC
    Intel Core i7-2600K 4.6 GHz Overclocked
    ASUS P8P67 Deluxe Motherboard
    AMD Firepro V8800 Video Card
    Crucial 128 GB SATA 6Gb/s Solid State Drive (Operating System)
    Corsair Vengeance 16GB (4x4GB) Memory
    Corsair H60 Liquid CPU Cooler
    Corsair Professional Series Gold AX850 Power Supply
    Graphite Series 600T Mid-Tower Case
    Western Digital Caviar Black 1 TB SATA III Hard Drive
    Western Digital Caviar Black 2 TB SATA III Hard Drive
    Western Digital Green 3 TB SATA III Hard Drive
    Logitech Wireless Gaming Mouse G700
    I don’t play any games but it’s a great productivity mouse with 13 customizable buttons
    Wacom Intuos5 Pen Tablet
Yes, this system is blazingly fast.  I have yet to feel it slow down, even with Photoshop, Lightroom, InDesign, Illustrator and numerous other apps running at the same time.  HOWEVER, Premiere Elements 9 has crashed NUMEROUS times, and every time my system wasn't even close to being fully taxed.
    Monitors – All run on the ATI V8800
    Dell Ultra Sharp 30 inch
    Samsung 27 Inch
    HAANS-G 28 Inch
    Herman Miller Embody Ergonomic Chair (one of my favorite items)

    Andy,
    There ARE some differences between PrE and PrPro w/ an approved CUDA-capable and MPE hardware acceleration-enabled nVidia video card, but those differences show up ONLY in the quality of the Scaling. The processing overhead is almost exactly the same, when it comes to handling the extra pixels.
    As of PrPro CS 5, two things changed:
    The max. size of Still Images went up from 4096 x 4096 pixels, to quite a bit larger (cannot recall the numbers now).
    The Scaling algorithms have been improved, though ONLY with the correct nVidia cards, with MPE hardware support enabled.
    Now, there CAN be another consideration, between the two programs, in that PrPro CS 5 - CS 6, are 64-bit ONLY, so one benefits from the computer and OS to run it. PrE can be either 32-bit, or 64-bit, so one might, or might not, be taking advantage of the 64-bit program and OS. Still, the processing overhead will be almost identical, it's just that the 64-bit OS can spread it around a bit.
    I still recommend Scaling the large Still Images in PS, prior to Import, to keep that processing overhead as low as is possible. Scaled Still Images work just fine, and I have one Project with 3000+ Scaled Still Images, that edits just fine in PrPro, even on my older 32-bit workstation. Testing that same machine, and PrPro some years ago, I could ONLY work with up to 5 - 4096 x 4096 Stills, before things ground to a crawl.
    Now, Adobe AfterEffects handles large Still Images differently, so I just moved that test Project to AE, and added another 20 large Images, which edited just fine. IIRC, AE can handle Still Images up to 10K x 10K pixels, and that might have gone up, as of CS 5.
    Good luck, and hope that helps,
    Hunt

  • Oracle performance problem -

We are facing some critical performance problems with one of the tables. I have a table with the schema given below. When there are around 2 lakh entries in this table, a select count(*) takes around 20 seconds to return the count, whereas any other table in the same database returns the count in less than 2 seconds for around 1 lakh entries. I am not able to figure out where the performance bottleneck is.
    Configuration:
    Oracle 8.0.5 on Windows NT 4.0
    USER_NAME NOT NULL VARCHAR2(30)
    TASK_ID NUMBER // Unique
    TASK_NAME NOT NULL VARCHAR2(100)
    TERMINAL_NAME VARCHAR2(30)
    TASK_DATE DATE
    TASK_STATUS NOT NULL NUMBER
    PARENT_ID NUMBER
    NE_NAME NOT NULL VARCHAR2(30)
    ENM_JOBID VARCHAR2(30)
    CMD_CONTENT VARCHAR2(4000)
    CMD_RESPONSE VARCHAR2(4000)
    RESPONSE_TIME DATE
    TASK_TYPE NUMBER
    SCHEDULE_TIME DATE
All the tables are present in the same tablespace, so I guess it could not be because of any disk access differences between tables. Have you guys faced any such problems? Could this be because of the huge CMD_RESPONSE and CMD_CONTENT fields?

    Thanks a lot.
I would like to know how I can optimise performance with longer records. Will increasing the block size help? If so, is there any other disadvantage to increasing the block size?
My actual requirement is to read all the records in the table, so a static cursor is created on the server and I do a block fetch (SQLFetchScroll); I use ODBC for this purpose. The first fetch takes a lot of time, around 50 seconds or so for 2 lakh records. Any ideas how to optimise this?
    Best Regards,
    Vignesh

  • Does anyone know about Master Collection CS6 performance problems on OSX Mountain Lion?

    I have a 15" MBP (late 2011) and I'm having some poor performance problems even after a clear install, reindexing the system and fixing all permission errors in Disk Utility. I have some suspicions about compatibility problems with Adobe Master Collection CS6 cause those are the programms I always work with and in wich I have experienced a bad performance.
    Hardware test says everything is fine. Any sugestion?

Guys, I suffered the same problem for 4 days.  I am on the new MBP with Retina display + Mountain Lion, 16 GB RAM, etc...
I first thought it had to do with the Retina display; however, that is not the case.
    SOLUTION (sounds odd but works):
    The only way I found to fix it, is setting photoshop to use the East Asian text engine instead of the Middle East one. (Prefs -> Type -> East Asian). Then restart Photoshop.

Hi team, I have some questions/problems with my Apple products (iPad, iPhone); I want an employee who can speak and type Thai

Hi team, I have some questions/problems with my Apple product (iPad); I would like an employee who can speak to or answer me in Thai.
1. I am Akanay, living in Bangkok, Thailand, and I have the following problem:
   - I made a credit card purchase of a game through the iTunes Store, via Apple iTunes ID: misskor.yaprom@***, for the game Eleves Realm on 18 Jul 2013 at 17:07. The credit card company billed me $39.99, whereas in the system I actually wanted to be charged $99.99. When I contacted the bank, I was told the card had been charged $39.99, but in fact I did not order the game for $39.99; the $99.99 charge was still being attempted on the credit card, and I confirmed that the system should not charge it, because there is an ongoing payment problem with the Apple iTunes Store.
   - Please have staff urgently check Apple iTunes ID: misskor.yaprom@*** and the purchase of the game Eleves Realm, as detailed above, to determine whether the system has a problem or something went wrong in the payment process.
Please contact me, Akanay, urgently at mobile number +**** / reply feedback email: lekod1@*** by Friday, 19 Jul 2013.
Thank you,
Akanay Udpin
    <Edited By Host>

Google translation of the reply, which was posted in Thai:
iTunes Store staff will not read messages posted on this forum. If you need help with a problem with the iTunes Store, you need to contact them through this web form:
    http://www.apple.com/emea/support/itunes/contact.html

  • LR 4.4 (and 5.0?) catalog: a problem and some questions

    Introductory Remark
    After several years of reluctance this March I changed to LR due to its retouching capabilities. Unfortunately – beyond enjoying some really nice features of LR – I keep struggling with several problems, many of which have been covered in this forum. In this thread I describe a problem with a particular LR 4.4 catalog and put some general questions.
    A few days ago I upgraded to 5.0. Unfortunately it turned out to produce even slower ’speed’ than 4.4 (discussed – among other places – here: http://forums.adobe.com/message/5454410#5454410), so I rather fell back to the latter, instead of testing the behavior of the 5.0 catalog. Anyway, as far as I understand this upgrade does not include significant new catalog functions, so my problem and questions below may be valid for 5.0, too. Nevertheless, the incompatibility of the new and previous catalogs suggests rewriting of the catalog-related parts of the code. I do not know the resulting potential improvements and/or new bugs in 5.0.
    For your information, my PC (running under Windows 7) has a 64-bit Intel Core i7-3770K processor, 16GB RAM, 240 GB SSD, as well as fast and large-capacity HDDs. My monitor has a resolution of 1920x1200.
    1. Problem with the catalog
    To tell you the truth, I do not understand the potential necessity for using the “File / Optimize Catalog” function. In my view LR should keep the catalog optimized without manual intervention.
    Nevertheless, when being faced with the ill-famed slowness of LR, I run this module. In addition, I always switch on the “Catalog Settings / General / Back up catalog” function. The actually set frequency of backing up depends on the circumstances – e.g. the number of RAW (in my case: NEF) files, the size of the catalog file (*.lrcat), and the space available on my SSD. In case of need I delete the oldest backup file to make space for the new one.
    Recently I processed 1500 photos, occupying 21 GB. The "Catalog Settings / Metadata / Automatically write changes into XMP" function was switched on. Unfortunately I had to fiddle with the images quite a lot, so after processing roughly half of them the catalog file reached the size of 24 GB. Until this stage there had been no sign of any failure – catalog optimizations had run smoothly and backups had been created regularly, as scheduled.
    Once, however, towards the end of generating the next backup, LR sent an error message saying that it had not been able to create the backup file, due to lack of enough space on the SSD. I myself found still 40 GB of empty space, so I re-launched the backup process. The result was the same, but this time I saw a mysterious new (journal?) file with a size of 40 GB… When my third attempt also failed, I had to decide what to do.
    Since I needed at least the XMP files with the results of my retouching operations, I simply wanted to save these side-cars into the directory of my original input NEF files on a HDD. Before making this step, I intended to check whether all modifications and adjustments had been stored in the XMP files.
    Unfortunately I was not aware of the realistic size of side-cars, associated with a certain volume of usage of the Spot Removal, Grad Filter, and Adjustment Brush functions. But as the time of the last modification of the XMP files (belonging to the recently retouched pictures) seemed perfect, I believed that all my actions had been saved. Although the "Automatically write changes into XMP" seemed to be working, in order to be on the safe side I selected all photos and ran the “Metadata / Save Metadata to File” function of the Library module. After this I copied the XMP files, deleted the corrupted catalog, created a new catalog, and imported the same NEF files together with the side-cars.
    When checking the photos, I was shocked: Only the first few hundred XMP files retained all my modifications. Roughly 3 weeks of work was completely lost… From that time on I regularly check the XMP files.
    Question 1: Have you collected any similar experience?
    2. The catalog-related part of my workflow
    Unless I miss an important piece of knowledge, LR catalogs store many data that I do not need in the long run. Having the history of recent retouching activities is useful for me only for a short while, so archiving every little step for a long time with a huge amount of accumulated data would be impossible (and useless) on my SSD. In terms of processing what count for me are the resulting XMP files, so in the long run I keep only them and get rid of the catalog.
    Out of the 240 GB of my SSD 110 GB is available for LR. Whenever I have new photos to retouch, I make the following steps:
    create a ‘temporary’ catalog on my SSD
    import the new pictures from my HDD into this temporary catalog
    select all imported pictures in the temporary catalog
    use the “File / Export as Catalog” function in order to copy the original NEF files onto the SSD and make them used by the ‘real’ (not temporary) new catalog
    use the “File / Open Catalog” function to re-launch LR with the new catalog
    switch on the "Automatically write changes into XMP" function of the new catalog
    delete the ‘temporary’ catalog to save space on the SSD
retouch the pictures (while keeping an eye on the due creation and development of the XMP files)
    generate the required output (TIF OR JPG) files
    copy the XMP and the output files into the original directory of the input NEF files on the HDD
    copy the whole catalog for interim archiving onto the HDD
    delete the catalog from the SSD
    upon making sure that the XMP files are all fine, delete the archived catalog from the HDD, too
    Question 2: If we put aside the issue of keeping the catalog for other purposes then saving each and every retouching steps (which I address below), is there any simpler workflow to produce only the XMP files and save space on the SSD? For example, is it possible to create a new catalog on the SSD with copying the input NEF files into its directory and re-launching LR ‘automatically’, in one step?
    Question 3: If this I not the case, is there any third-party application that would ease the execution of the relevant parts of this workflow before and/or after the actual retouching of the pictures?
    Question 4: Is it possible to set general parameters for new catalogs? In my experience most settings of the new catalogs (at least the ones that are important for me) are copied from the recently used catalog, except the use of the "Catalog Settings / Metadata / Automatically write changes into XMP" function. This means that I always have to go there to switch it on… Not even a question is raised by LR whether I want to change anything in comparison with the settings of the recently used catalog…
    3. Catalog functions missing from my workflow
    Unfortunately the above described abandoning of catalogs has at least two serious drawbacks:
I miss the classification features (rating, keywords, collections, etc.). Anyway, these functions would only be really meaningful for me if they covered all my existing photos, which would require going back to 41k images to classify them. In addition, keeping all the pictures in one catalog would result in an extremely large catalog file, almost surely guaranteeing regular failures. Besides, due to the speed problem, tolerable conditions could be established only by keeping the original NEF files on the SSD, which is out of the question. Generating several ‘partial’ catalogs could somewhat circumvent this trap, but it would require presorting the photos (e.g. by capture time or subject) and by doing this I would lose the essence of having a single catalog covering all my photos.
    Question 5: Is it the right assumption that storing only some parts (e.g. the classification-related data) of catalog files is impossible? My understanding is that either I keep the whole catalog file (with the outdated historical data of all my ‘ancient’ actions) or abandon it.
    Question 6: If such ‘cherry-picking’ is facilitated after all: Can you suggest any pragmatic description of the potential (competing) ways of categorizing images efficiently, comparing them along the pros and contras?
    I also lose the virtual copies. Anyway, I am confused regarding the actual storage of the retouching-related data of virtual copies. In some websites one can find relatively old posts, stating that the XMP file contains all information about modifying/adjusting both the original photo and its virtual copy/copies. However, when fiddling with a virtual copy I cannot see any change in the size of the associated XMP file. In addition, when I copy the original NEF file and its XMP file, rename them, and import these derivative files, only the retouched original image comes up – I cannot see any virtual copy. This suggests that the XMP file does not contain information on the virtual copy/copies…
    For this reason whenever multiple versions seem to be reasonable, I create renamed version(s) of the same NEF+XMP files, import them, and make some changes in their settings. I know, this is far not a sophisticated solution…
    Question 7: Where and how the settings of virtual copies are stored?
    Question 8: Is it possible to generate separate XMP files for both the originally retouched image and its virtual copy/copies and to make them recognized by LR when importing them into a new catalog?

    A part of my problems may be caused by selecting LR for a challenging private project, where image retouching activities result in bigger than average volume of adjustment data. Consequently, the catalog file becomes huge and vulnerable.
While I understand that something has gone wrong for you, causing Lightroom to be slow and unstable, I think you are combining many unrelated ideas into a single concept, and winding up with a mistaken idea. Just because your project is challenging does not mean Lightroom is unsuitable. A bigger than average volume of adjustment data will make the catalog larger (I don't know about "huge"), but I doubt bigger by itself will make the catalog "vulnerable".
    The causes of instability and crashes may have NOTHING to do with catalog size. Of course, the cause MAY have everything to do with catalog size. I just don't think you are coming to the right conclusion, as in my experience size of catalog and stability issues are unrelated.
    2. I may be wrong, but in my experience the size of the RAW file may significantly blow up the amount of retouching-related data.
    Your experience is your experience, and my experience is different. I want to state clearly that you can have pretty big RAW files that have different content and not require significant amounts of retouching. It's not the size of the RAW that determines the amount of touchup, it is the content and the eye of the user. Furthermore, item 2 was related to image size, and now you have changed the meaning of number 2 from image size to the amount of retouching required. So, what is your point? Lots of retouching blows up the amount of retouching data that needs to be stored? Yeah, I agree.
    When creating the catalog for the 1500 NEF files (21 GB), the starting size of the catalog file was around 1 GB. This must have included all classification-related information (the meaningful part of which was practically nothing, since I had not used rating, classification, or collections). By the time of the crash half of the files had been processed, so the actual retouching-related data (that should have been converted properly into the XMP files) might be only around 500 MB. Consequently, probably 22.5 GB out of the 24 GB of the catalog file contained historical information
    I don't know exactly what you do to touch up your photos, I can't imagine how you come up with the size should be around 500MB. But again, to you this problem is entirely caused by the size of the catalog, and I don't think it is. Now, having said that, some of your problem with slowness may indeed be related to the amount of touch-up that you are doing. Lightroom is known to slow down if you do lots of spot removal and lots of brushing, and then you may be better off doing this type of touch-up in Photoshop. Again, just to be 100% clear, the problem is not "size of catalog", the problem is you are doing so many adjustments on a single photo. You could have a catalog that is just as large, (i.e. that has lots more photos with few adjustments) and I would expect it to run a lot faster than what you are experiencing.
    So to sum up, you seem to be implying that slowness and catalog instability are the same issue, and I don't buy it. You seem to be implying that slowness and instability are both caused by the size of the catalog, and I don't buy that either.
    Re-reading your original post, you are putting the backups on the SSD, the same disk as the working catalog? This is a very poor practice, you need to put your backups on a different physical disk. That alone might help your space issues on the SSD.

  • Question for ORACLE EXPERTS!!! PERFORMANCE PROBLEM!!!

I have 2 nodes on RAC. On node1 the following query runs in 15 seconds and on node2 the same query runs in 40 minutes. This is a big problem because we are migrating a SQL Server database to Oracle RAC 10gR2 (10.2.0.2) on HP-UX Itanium and we can't finish the project because of this performance problem!!! PLEASE HELP ME ORACLE GURUS !!! I opened a TAR at Metalink 3 months ago, but they can't help me with this. Can anyone explain this event to me please??? "gc cr multi block request 92011 0.33 737.49"
This is the query that I run on both nodes, and this is the tkprof of the trace file when I run it on the second node:
    select /*+ NO_PARALLEL(t) */ sum(t.saldo_em_real)
    from
    brcapdb2.titulo t
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 13.76 790.26 0 107219 0 0
    total 3 13.76 790.26 0 107219 0 0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 31
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 2 0.00 0.00
    SQL*Net message from client 2 0.02 0.02
    gc cr multi block request 92011 0.33 737.49
    gc current grant busy 1 0.00 0.00
    latch: KCL gc element parent latch 4 0.00 0.00
    latch: object queue header operation 17 0.00 0.01
    latch free 6 0.00 0.00
    gc remaster 9 1.94 9.00
    gcs drm freeze in enter server mode 19 1.97 30.52
    gc current block 2-way 10 0.61 2.72
    SQL*Net break/reset to client 1 0.01 0.01
    Please help me if you can!!
    Tks,
    Paulo.

    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 2 0.00 0.00
    SQL*Net message from client 2 0.02 0.02
    gc cr multi block request 92011 0.33 737.49
    gc current grant busy 1 0.00 0.00
    latch: KCL gc element parent latch 4 0.00 0.00
    latch: object queue header operation 17 0.00 0.01
    latch free 6 0.00 0.00
    gc remaster 9 1.94 9.00
    gcs drm freeze in enter server mode 19 1.97 30.52
    gc current block 2-way 10 0.61 2.72
SQL*Net break/reset to client 1 0.01 0.01
There's lots of global cache traffic going on. All those multiblock transfers seem to be saturating the interconnect. Some things to consider:
    - Have a look at metalink note 3951017.8 for a possible fix
    - Check that your interconnect is properly configured
    - Try running the query with PARALLEL hint. This would mean doing disk I/O instead of buffer & interconnect I/O.
    edit: changed "without NOPARALLEL" to "with PARALLEL"
    Message was edited by:
    antti.koskinen

HT5312 I did all the steps, but my problem is that they are asking me some questions that I did not select when I created my Apple ID, so I clicked where it said reset security questions info... and then when I went to my email, they didn't send me anything

I did all the steps, but my problem is that they are asking me some questions that I did not select when I created my Apple ID, so I clicked where it said reset security question info... And then when I went and checked my email, they did not send anything to me!
    This problem started 1 month ago!
    Ivana1728

    The email response is automated, and almost certainly is detected as "junk" or "spam". So, try again and check your spam folders for their reply.

  • I have some questions regarding setting up a software RAID 0 on a Mac Pro

    I have some questions regarding setting up a software RAID 0 on a Mac pro (early 2009).
    These questions might seem stupid to many of you, but, as my last, in fact my one and only, computer before the Mac Pro was a IICX/4/80 running System 7.5, I am a complete novice regarding this particular matter.
    A few days ago I installed a WD3000HLFS VelociRaptor 300GB in bay 1, and moved the original 640GB HD to bay 2. I now have 2 bootable internal drives, and currently I am using the VR300 as my startup disk. Instead of cloning from the original drive, I have reinstalled the Mac OS, and all my applications & software onto the VR300. Everything is backed up onto a WD SE II 2TB external drive, using Time Machine. The original 640GB has an eDrive partition, which was created some time ago using TechTool Pro 5.
    The system will be used primarily for photo editing, digital imaging, and to produce colour prints up to A2 size. Some of the image files, from scanned imports of film negatives & transparencies, will be 40MB or larger. Next year I hope to buy a high resolution full frame digital SLR, which will also generate large files.
    Currently I am using Apple's bundled iPhoto, Aperture 2, Photoshop Elements 8, Silverfast Ai, ColorMunki Photo, EZcolor and other applications/software. I will also be using Photoshop CS5, when it becomes available, and I will probably change over to Lightroom 3, which is currently in Beta, because I have had problems with Aperture, which, until recent upgrades (HD, RAM & graphics card) to my system, would not even load images for print. All I had was a blank preview page, and a constant, frozen "loading" message - the symbol underneath remained static, instead of revolving!
    It is now possible to print images from within Aperture 2, but I am not happy with the colour fidelity, whereas it is possible to produce excellent, natural colour prints using its "minnow" sibling, iPhoto!
    My intention is to buy another 3 VR300s to form a 4 drive Raid 0 array for optimum performance, and to store the original 640GB drive as an emergency bootable back-up. I would have ordered the additional VR300s already, but for the fact that there appears to have been a run on them, and currently they are out of stock at all, but the more expensive, UK resellers.
    I should be most grateful to receive advice regarding the following questions:
    QUESTION 1:
    I have had a look at the RAID setting up facility in Disk Utility and it states: "To create a RAID set, drag disks or partitions into the list below".
    If I install another 3 VR300s, can I drag all 4 of them into the "list below" box, without any risk of losing everything I have already installed on the existing VR300?
    Or would I have to reinstall the OS, applications and software again?
    I mention this, because one of the applications, Personal accountz, has a label on its CD wallet stating that the Licence Key can only be used once, and I have already used it when I installed it on the existing VR300.
    QUESTION 2:
    I understand that the failure of just one drive will result in all the data in a Raid 0 array being lost.
    Does this mean that I would not be able to boot up from the 4 drive array in that scenario?
    Even so, it would be worth the risk to gain the optimum performance provide by Raid 0 over the other RAID setup options, and, in addition to the SE II, I will probably back up all my image files onto a portable drive as an additional precaution.
    QUESTION 3:
Is it possible to create an eDrive partition, using TechTool Pro 5, on the VR300 in bay 1?
    Or would this not be of any use anyway, in the event of a single drive failure?
    QUESTION 4:
    Would there be a significant increase in performance using a 4 x VR300 drive RAID 0 array, compared to only 2 or 3 drives?
    QUESTION 5:
    If I used a 3 x VR300 RAID 0 array, and installed either a cloned VR300 or the original 640GB HD in bay 4, and I left the Startup Disk in System Preferences unlocked, would the system boot up automatically from the 4th. drive in the event of a single drive failure in the 3 drive RAID 0 array which had been selected for startup?
    Apologies if these seem stupid questions, but I am trying to determine the best option without foregoing optimum performance.

    Well said.
    Steps to set up RAID
    Setting up a RAID array in Mac OS X is part of the installation process. This procedure assumes that you have already installed Mac OS 10.1 and the hard drive subsystem (two hard drives and a PCI controller card, for example) that RAID will be implemented on. Follow these steps:
    1. Open Disk Utility (/Applications/Utilities).
    2. When the disks appear in the pane on the left, select the disks you wish to be in the array and drag them to the disk panel.
    3. Choose Stripe or Mirror from the RAID Scheme pop-up menu.
    4. Name the RAID set.
    5. Choose a volume format. The size of the array will be automatically determined based on what you selected.
    6. Click Create.
    Recovering from a hard drive failure on a mirrored array
    1. Open Disk Utility in (/Applications/Utilities).
    2. Click the RAID tab. If an issue has occurred, a dialog box will appear that describes it.
    3. If an issue with the disk is indicated, click Rebuild.
    4. If Rebuild does not work, shut down the computer and replace the damaged hard disk.
    5. Repeat steps 1 and 2.
    6. Drag the icon of the new disk on top of that of the removed disk.
    7. Click Rebuild.
    http://support.apple.com/kb/HT2559
    Drive A + B = VOLUME ONE
    Drive C + D = VOLUME TWO
    What you put on those volumes is of course up to you and easy to do.
    A system really only needs to be backed up "as needed" like before you add or update or install anything.
    /Users can be backed up hourly, daily, weekly schedule
    Media files as needed.
    Things that hurt performance:
    Page outs
    Spotlight - disable this for boot drive and 'scratch'
    SCRATCH: Temporary space; erased between projects and steps.
    http://en.wikipedia.org/wiki/StandardRAIDlevels
    (normally I'd link to Wikipedia but I can't load right now)
    Disk drives are the slowest component, so tackling that has always made sense. Easy way to make a difference. More RAM only if it will be of value and used. Same with more/faster processors, or graphic card.
    To help understand and configure your 2009 Nehalem Mac Pro:
    http://arstechnica.com/apple/reviews/2009/04/266ghz-8-core-mac-pro-review.ars/1
    http://macperformanceguide.com/
    http://www.macgurus.com/guides/storageaccelguide.php
    http://www.macintouch.com/readerreports/harddrives/index.html
    http://macperformanceguide.com/OptimizingPhotoshop-Configuration.html
    http://kb2.adobe.com/cps/404/kb404440.html

  • Interactive report performance problem over database link - Oracle Gateway

    Hello all;
    This is regarding a thread Interactive report performance problem over database link that was posted by Samo.
The issue I am facing is that when I use an Oracle function like apex_item.checkbox, the query slows down by 45 seconds.
The query is like this (due to sensitivity issues, I cannot disclose the real table names):
    SELECT apex_item.checkbox(1,b.col3)
    , a.col1
    , a.col2
    FROM table_one a
    , table_two b
    WHERE a.col3 = 12345
    AND a.col4 = 100
    AND b.col5 = a.col5
    table_one and table_two are remote tables (non-oracle) which are connected using Oracle Gateway.
Now if I run the above query without the apex_item.checkbox function, the response time is less than a second, but with apex_item.checkbox the query runs for more than 30 seconds. I have worked around the issue by creating a collection, but that's not good practice.
I would like to get ideas on how to resolve this or speed up the query.
Any idea how to use sub-factoring for the above scenario? Or other methods (creating a view or materialized view is not an option)?
    Thank you.
    Shaun S.

    Hi Shaun
    Okay, I have a million questions (could you tell me if both tables are from the same remote source, it looks like they're possibly not?), but let's just try some things first.
    By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This is to do with using the WITH blah AS (SELECT... syntax. Now in most circumstances this 'materialises' the results of the inner select statement. This means that we 'get' the results then do something with them afterwards. It's a handy trick when dealing with remote sites as sometimes you want the remote database to do the work. The reason that I ask you to use the MATERIALIZE hint for testing is just to force this, in 99.99% of cases this can be removed later. Using the WITH statement is also handled differently to inline view like SELECT * FROM (SELECT... but the same result can be mimicked with a NO_MERGE hint.
Looking at your case I would be interested to see what the explain plan and results would be for something like the following two statements (sorry - you're going to have to check them, it's late!)
    WITH a AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_one),
    b AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_two),
    sourceqry AS
    (SELECT  b.col3 x
           , a.col1 y
           , a.col2 z
    FROM table_one a
        , table_two b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
    AND   b.col5 = a.col5)
    SELECT apex_item.checkbox(1,x), y , z
    FROM sourceqry
    WITH a AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_one),
    b AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_two)
    SELECT  apex_item.checkbox(1,x), y , z
    FROM table_one a
        , table_two b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
AND   b.col5 = a.col5
If the remote tables are at the same site, then you should have the same results. If they aren't, you should get the same results but different from the original query.
We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries on remote and especially non-Oracle sites). This hinders tuning normally, but I don't think this is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times: in APEX, SQL Developer, Toad, SQL*Plus, etc.?
    Sorry for all the questions but it helps to answer the question, if I can.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)
