Performance problem with COGNOS and BEx integration

Hi Gems,
I have a performance problem with some of my queries when integrating with COGNOS.
My query is simple: it gets the data for a date interval:
From Date: 20070101
To Date: 20070829
When the query is executed in BEx it takes 2 minutes, but when it is executed in COGNOS it takes almost 10 minutes or more.
Is there any way to debug the report and see how the data is sent to COGNOS, for example by tracing the OLE DB interface? And how can the performance of the query in COGNOS be improved?
Thanks in advance.
Regards,
AK

Hi,
Please check the following CA Unicenter configuration on the SunMC server:
- Is the Event Adapter (ea-start) running? Without this daemon, no event forwarding to CA Unicenter takes place, nor does discovery from CA Unicenter work.
How to debug:
- Run ea-start in debug mode:
# /opt/SUNWsymon/SunMC-TNG/sbin/ea-start -d9
- Check whether the Event Adapter has been set up:
# /var/opt/SUNWsymon/SunMC-TNG/cfg_sunmctotng
- Check the CA log file:
# /var/opt/SUNWsymon/SunMC-TNG/SunMCToTngAdaptorMain.log
Once all of that is fine, see this page, which explains how to discover a SunMC agent from CA Unicenter:
http://docs.sun.com/app/docs/doc/817-1101/6mgrtmkao?a=view#tngtrouble-6
Kind Regards

Similar Messages

  • Performance Problems on Faces Navigation Diagram and Hyperthreading query

    Am I the only one having performance problems when dealing with faces-config diagrams of about 35 JSPs displayed on the sheet, using JDeveloper 10.1.3? It takes my workstation about a minute and a half to update the name of an arrow, and the most stressed component during this task seems to be the CPU.
    And another question: has anybody investigated how the performance of JDeveloper is affected by enabling or disabling hyperthreading? In my case, CPU usage only ever reaches 50%, so I'm tempted to switch HT off to let JDeveloper use all the CPU power, if that would help.

    Hello Diego,
    you mentioned that you compared a BEx query with the Web Intelligence report. Could you provide more details here?
    - What are the elements in the rows, columns and free characteristics of the BEx query?
    - Was the query executed as designed in the BEx Query Designer with BEx web reporting?
    - What are the elements in the Web Intelligence query panel?
    Thanks,
    Ingo

  • Problems with dynamic data slices and BEx Analyzer

    Hello experts,
    we use a data slice to lock released data.
    This data slice contains an exit variable which selects, from an ABAP Dictionary table, all plans that have already been released.
    The user can release data via a planning function, which he can start with a button in a BEx workbook.
    He uses the same workbook to enter planning data.
    The problem is:
    When a user releases data, the value of the exit variable in our data slice changes, but data slice 9999 does not seem to be regenerated with the new value. The result is that the user can still change released data.
    The data slice is only regenerated after logging off and on again, not after refreshing the query.
    Is there a way to force the system to regenerate data slice 9999 with the current value of the exit variable, perhaps a function module or method?
    We have this problem only in BEx Analyzer. In web reporting everything works fine and the data is locked by the data slice after a refresh.
    Thank you
    Johannes

    Hello Johannes,
    if you want this kind of behaviour you need to go for a data slice of type exit; you will find some information about this in the Planning forum.
    The data slice is instantiated once, but when you release the data you need to call a method in your exit data slice which rereads the values. This is a rough idea of how it could be done.
    Regards, Matthias Nutt
    SAP Consulting Switzerland

  • Performance problem between Oracle.DataAccess v1 and v2

    Hi, I have a serious performance problem with OracleDataReader when I use the GetValues method.
    My server is Oracle 9.2.0.7, and I use ODAC v10.2.0.221.
    I created a dummy table for the benchmark:
    create table test (a varchar2(50), b number);
    begin
        for i in 1..62359 loop
            insert into test values ('Values ' || i, i);
        end loop;
        commit;
    end;
    /
    I use the same code to benchmark Framework v1 and Framework v2.
    Code:
    try {
        OracleConnection c = new OracleConnection("user id=saturne_dbo;password=***;data source=satedfx;");
        c.Open();
        go(c);
        c.Close();
    } catch (Exception ex) {
        MessageBox.Show(ex.Message);
    }

    private void go(IDbConnection c) {
        IDbCommand cmd = c.CreateCommand();
        cmd.CommandText = "select * from test";
        cmd.CommandType = CommandType.Text;
        DateTime dt = DateTime.Now;
        IDataReader reader = cmd.ExecuteReader();
        int count = 0;
        while (reader.Read()) {
            object[] fields = new object[reader.FieldCount];
            reader.GetValues(fields);
            count++;
        }
        reader.Close();
        TimeSpan eps = DateTime.Now - dt;
        MessageBox.Show("Time " + count + " : " + eps.TotalSeconds);
    }
    The results are:
    Framework v1 with Oracle.DataAccess 1.10.2.2.20: "Time 62359 : 0.5"
    Framework v2 with Oracle.DataAccess 2.10.2.2.20: "Time 62359 : 3.57" (a factor of 6!)
    I notice the same problem with the OleDb provider and the Microsoft Oracle client provider.
    It's a serious problem for my production server; the computation time explodes.
    What is the explanation? Do you know of a solution?

    Can you please try out the following:
    1. Create a .NET 1.x DLL with your benchmark code. This will obviously use ODP.NET for .NET 1.x.
    2. Call this assembly routine from a .NET 1.x executable and note the results.
    3. Now call this assembly routine from a .NET 2.0 executable and note the results.
    The idea is to always use ODP.NET for .NET 1.x, even in the .NET 2.0 runtime. This will tell us whether the performance degradation is a runtime issue.

  • Query performance problem: events 2505 (Read Cache) and 2510 (Write Cache)

    Hi,
    I am experiencing severe performance problems with a query, specifically with events 2505 (Read Cache) and 2510 (Write Cache), which went up to 11,000 seconds on some executions. Data Manager (400 s), OLAP data selection (90 s) and OLAP user exit (250 s) are the other events with noticeable times; all other events are very quick.
    The query settings (RSRT) are:
    - persistent cache across each application server -> cluster table,
    - update cache in delta process is checked -> grouped on InfoProvider type,
    - use cache despite virtual characteristics/key figures is checked (one InfoCube has one virtual key figure which should have a static result for a given day).
    Do you know how I can get more detail than what is in 0TCT_C02 to break down the time of the read and write cache events, or do you have any recommendation?
    I have checked that no data loads were in progress on the InfoProviders and no master data loads (change run) were running. Overall system performance was acceptable for other queries.
    Thanks


  • Problem integrating Forms 6i and Reports 6i on the web

    Hi,
    I am integrating Forms 6i and Reports 6i on the web. If one user accesses the reports there is no problem, but if multiple users access the reports at the same time the server slows down and a memory-related error also appears. Can anybody explain this problem?
    And tell me: do I have to use the Concurrent Manager, or can I solve the problem without it?
    Thank you...
    With regards,
    Thivan.A.S.

    Is your application server big enough for the application you host? Did you test it with a stress test before going to production? There are several good white papers from Oracle in the Forms section on OTN.
    It sounds as if you run out of memory when starting some reports. Because Reports sometimes (depending on the report) uses much more memory than a form, you can temporarily run out of memory.
    Check the memory usage on the server, if possible, when running such reports.

  • Performance problem on view with spatial column - resolved

    I have had a problem with queries on a view that had a spatial column, where the view did not belong to the logged-in user. When my spatial window was retrieved by a sub-query, the spatial scan did a full table scan instead of using the spatial index.
    I have found that the problem can be resolved by granting MERGE VIEW on the view to the querying user.
    The view can be as simple as SELECT * FROM table.
    The badly performing query could be as simple as
    select id from T1.tstview
    where SDO_RELATE(coordinates,
    (SELECT coordinates FROM T1.tstWINDOW WHERE ID = '1')
    ,'mask=INSIDE+COVEREDBY querytype=WINDOW') = 'TRUE'  ;
    I think this is a bug and have raised an SR; MERGE VIEW is supposed to override issues with the "security intent" of a view.
    The workaround is simple enough once you're aware of it and I thought it was worth passing on.

    Thanks for sharing this workaround!
    Which Oracle version did you test?

  • Performance problems in games with Sound Blaster USB Sound Dev

    Hello!
    I recently bought a new USB sound device named "Creative Sound Blaster Surround 5.1 USB Sound Device", or maybe "Sound Blaster Live! 24-bit External", as that is the name Windows displays.
    Before, I was using a C-Media PCI audio device.
    Other system info:
    Windows Vista SP
    ATI Radeon 9200 LE (I know it's old, but I was able to play the games before)
    2GB RAM
    Now my performance in games has decreased significantly.
    I only play Team Fortress 2, but I also tried an old GoldSrc-engine based game, which was also slower.
    The game no longer plays fluently like before: my FPS dropped, and every few seconds the graphics freeze as if there were a big network lag. It is now impossible to really play the game.
    I have already tried:
    - reinstalling Steam and TF2
    - using another USB port
    - reinstalling the sound drivers
    - turning all (both sound and graphics) settings down to low
    Steam support thinks that the USB data transfer rate isn't high enough for what is needed and recommends that I buy an internal sound card.
    But I hope there is a different solution to this problem and someone here can help.
    Thanks!

    No PCI interface, but USB = slow. There is no hardware acceleration (maybe your C-Media had real, not pseudo, DirectSound hardware support), so it's normal that the Sound Blaster USB is slower in every respect. But it may sound better...
    The USB versions are only for notebooks and other special situations where a PCI sound card with an X-Fi or Audigy chip cannot be used...

  • Performance problems related to Timesheet entry and Time Admin processing.

    Implementing 9.0; the customer is in UAT and experiencing performance delays on the Time Admin process and the timesheet page when using the Apply Rules button. They have quite a few rules, and when the number of users increases to 30 concurrent users, severe performance issues appear on the timesheet. At this point they are more concerned with the timesheet performance than with the Time Admin performance, and they have delayed their go-live date until this issue is resolved.
    In the Performance Monitor data we are getting several failed statuses for the PMU 'JOLT Request' and PMU details 'ICPanel'. In the additional data area it states:
    Error Status Code:
    Jolt ServiceException: Jolt Errno 100 JoltException.TPEJOLT
    PeopleSoft 9.0
    WebLogic 9.2
    Database: SQL Server 2005 SP3
    Windows Server 2003 SP2

    Have you tried raising an SR with Oracle Support?
    Also, timesheet performance is a known issue, and there are multiple such issues reported on Metalink. You can look at those for potential solutions:
    https://support.oracle.com/CSP/main/article?cmd=show&id=659033.1&type=NOT
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=PROBLEM&id=857761.1
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=PROBLEM&id=961924.1

  • BDB read performance problem: lock contention between GC and VM threads

    Problem: BDB read performance is really bad once the size of the BDB crosses 20GB. Once the database crosses 20GB or gets near it, it takes more than one hour to read/delete/add 200K keys.
    After a point, of these 200K keys there are about 15-30K keys that are new; this number should eventually come down, and after a point there should be no new keys at all.
    Application:
    A Transactional Data Store application: a single-threaded process that reads one key's data, deletes the data and adds new data. The keys are really small (20 bytes) and the data is large (it grows from 1KB to 100KB).
    On one machine I have a total of 3 processes running, each process accessing its own BDB on a separate RAID 1+0 drive, so as far as I can tell there should be no disk I/O wait slowing down the reads.
    After a point (past 20GB) there are about 4-5 million keys in my BDB, and the data associated with each key can be anywhere between 1KB and 100KB. Eventually every key will have 100KB of data associated with it.
    Hardware:
    16-core Intel Xeon, 96GB of RAM, 8 drives, running 2.6.18-194.26.1.0.1.el5 #1 SMP x86_64 GNU/Linux
    BDB config: BTREE
    BDB version: 4.8.30
    BDB cache size: 4GB
    BDB page size: experimented with 8KB and 64KB
    3 processes, each process accessing its own BDB on a separate RAID 1+0 drive.
    envConfig.setAllowCreate(true);
    envConfig.setTxnNoSync(ourConfig.asynchronous);
    envConfig.setThreaded(true);
    envConfig.setInitializeLocking(true);
    envConfig.setLockDetectMode(LockDetectMode.DEFAULT);
    When writing to BDB (asynchronous transactions):
    TransactionConfig tc = new TransactionConfig();
    tc.setNoSync(true);
    When reading from BDB (allowing reads of uncommitted pages):
    CursorConfig cc = new CursorConfig();
    cc.setReadUncommitted(true);
    BDB stats (BDB size 49GB):
    $ db_stat -m
    3GB 928MB Total cache size
    1 Number of caches
    1 Maximum number of caches
    3GB 928MB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    2127M Requested pages found in the cache (97%)
    57M Requested pages not found in the cache (57565917)
    6371509 Pages created in the cache
    57M Pages read into the cache (57565917)
    75M Pages written from the cache to the backing file (75763673)
    60M Clean pages forced from the cache (60775446)
    2661382 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    500593 Current total page count
    500593 Current clean page count
    0 Current dirty page count
    524287 Number of hash buckets used for page location
    4096 Assumed page size used
    2248M Total number of times hash chains searched for a page (2248788999)
    9 The longest hash chain searched for a page
    2669M Total number of hash chain entries checked for page (2669310818)
    0 The number of hash bucket locks that required waiting (0%)
    0 The maximum number of times any hash bucket lock was waited for (0%)
    0 The number of region locks that required waiting (0%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    63M The number of page allocations (63937431)
    181M The number of hash buckets examined during allocations (181211477)
    16 The maximum number of hash buckets examined for an allocation
    63M The number of pages examined during allocations (63436828)
    1 The max number of pages examined for an allocation
    0 Threads waited on page I/O
    0 The number of times a sync is interrupted
    Pool File: lastPoints
    8192 Page size
    0 Requested pages mapped into the process' address space
    2127M Requested pages found in the cache (97%)
    57M Requested pages not found in the cache (57565917)
    6371509 Pages created in the cache
    57M Pages read into the cache (57565917)
    75M Pages written from the cache to the backing file (75763673)
    $ db_stat -l
    0x40988 Log magic number
    16 Log version number
    31KB 256B Log record cache size
    0 Log file mode
    10Mb Current log file size
    856M Records entered into the log (856697337)
    941GB 371MB 67KB 112B Log bytes written
    2GB 262MB 998KB 478B Log bytes written since last checkpoint
    31M Total log file I/O writes (31624157)
    31M Total log file I/O writes due to overflow (31527047)
    97136 Total log file flushes
    686 Total log file I/O reads
    96414 Current log file number
    4482953 Current log file offset
    96414 On-disk log file number
    4482862 On-disk log file offset
    1 Maximum commits in a log flush
    1 Minimum commits in a log flush
    160KB Log region size
    195 The number of region locks that required waiting (0%)
    $ db_stat -c
    7 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    160 Number of lock object partitions
    0 Number of current locks
    1218 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    8 Maximum number of lockers at any one time
    0 Number of current lock objects
    1218 Maximum number of lock objects at any one time
    5 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    400M Total number of locks requested (400062331)
    400M Total number of locks released (400062331)
    0 Total number of locks upgraded
    1 Total number of locks downgraded
    0 Lock requests not available due to conflicts, for which we waited
    0 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    1MB 544KB The size of the lock region
    0 The number of partition locks that required waiting (0%)
    0 The maximum number of times any partition lock was waited for (0%)
    0 The number of object queue operations that required waiting (0%)
    0 The number of locker allocations that required waiting (0%)
    0 The number of region locks that required waiting (0%)
    5 Maximum hash bucket length
    $ db_stat -CA
    Default locking region information:
    7 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    160 Number of lock object partitions
    0 Number of current locks
    1218 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    8 Maximum number of lockers at any one time
    0 Number of current lock objects
    1218 Maximum number of lock objects at any one time
    5 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    400M Total number of locks requested (400062331)
    400M Total number of locks released (400062331)
    0 Total number of locks upgraded
    1 Total number of locks downgraded
    0 Lock requests not available due to conflicts, for which we waited
    0 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    1MB 544KB The size of the lock region
    0 The number of partition locks that required waiting (0%)
    0 The maximum number of times any partition lock was waited for (0%)
    0 The number of object queue operations that required waiting (0%)
    0 The number of locker allocations that required waiting (0%)
    0 The number of region locks that required waiting (0%)
    5 Maximum hash bucket length
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock REGINFO information:
    Lock Region type
    5 Region ID
    __db.005 Region name
    0x2accda678000 Region address
    0x2accda678138 Region primary address
    0 Region maximum allocation
    0 Region allocated
    Region allocations: 6006 allocations, 0 failures, 0 frees, 1 longest
    Allocations by power-of-two sizes:
    1KB 6002
    2KB 0
    4KB 0
    8KB 0
    16KB 1
    32KB 0
    64KB 2
    128KB 0
    256KB 1
    512KB 0
    1024KB 0
    REGION_JOIN_OK Region flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock region parameters:
    524317 Lock region region mutex [0/9 0% 5091/47054587432128]
    2053 locker table size
    2053 object table size
    944 obj_off
    226120 locker_off
    0 need_dd
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock conflict matrix:
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by lockers:
    Locker Mode Count Status ----------------- Object ---------------
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by object:
    Locker Mode Count Status ----------------- Object ---------------
    Diagnosis:
    I'm seeing way too much lock contention on the Java garbage collector threads, and also on the VM thread, when I strace my Java process, and I don't understand the behavior.
    We are spending more than 95% of the time trying to acquire locks and I don't know what these locks are. Any info here would help.
    Earlier I thought the overflow pages were the problem, as the 100KB data size exceeds all overflow page limits, so I implemented a duplicate-keys scheme, chunking my data to fit within the overflow page limits.
    Now I don't see any overflow pages in my system, but I still see bad BDB read performance.
    $ strace -c -f -p 5642   (607 of the futex calls returned errors, i.e. lock timeouts)
    Process 5642 attached with 45 threads - interrupt to quit
    % time     seconds  usecs/call     calls    errors syscall
    98.19    7.670403        2257      3398       607 futex
     0.84    0.065886           8      8423           pread
     0.69    0.053980        4498        12           fdatasync
     0.22    0.017094           5      3778           pwrite
     0.05    0.004107           5       808           sched_yield
     0.00    0.000120          10        12           read
     0.00    0.000110           9        12           open
     0.00    0.000089           7        12           close
     0.00    0.000025           0      1431           clock_gettime
     0.00    0.000000           0        46           write
     0.00    0.000000           0         1         1 stat
     0.00    0.000000           0        12           lseek
     0.00    0.000000           0        26           mmap
     0.00    0.000000           0        88           mprotect
     0.00    0.000000           0        24           fcntl
    100.00    7.811814                 18083       608 total
    The above stats show that too much time is spent locking (futex calls), and I don't understand that, because the application is really single-threaded. I have turned on asynchronous transactions, so the writes might be flushed asynchronously in the background, but spending that much time locking and timing out seems wrong.
    So there is possibly something I'm not setting, or something weird about the way the JVM is behaving on my box.
    I grepped for futex calls in one of my strace log snippets, and I see that there is a VM thread that grabbed the mutex the maximum number of times (223), followed by the garbage collector threads. The following are the lock counts and thread PIDs within the process.
    These are the GC threads (each thread has grabbed the lock about 85 times on average):
      86 [8538]
      85 [8539]
      91 [8540]
      91 [8541]
      92 [8542]
      87 [8543]
      90 [8544]
      96 [8545]
      87 [8546]
      97 [8547]
      96 [8548]
      91 [8549]
      91 [8550]
      80 [8552]
    "VM Periodic Task Thread" prio=10 tid=0x00002aaaf4065000 nid=0x2180 waiting on condition (the main problem??)
     223 [8576] ==> grabbing the lock 223 times; not sure why this is happening…
    "pool-2-thread-1" prio=10 tid=0x00002aaaf44b7000 nid=0x21c8 runnable [0x0000000042aa8000] (the main worker thread)
       34 [8648] (the main thread grabs the futex only 34 times, compared to all the other threads)
    The load average seems OK, though my system thinks it has very little memory left, and I think that is because it is using a lot of memory for the file system cache:
    top - 23:52:00 up 6 days, 8:41, 1 user, load average: 3.28, 3.40, 3.44
    Tasks: 229 total, 1 running, 228 sleeping, 0 stopped, 0 zombie
    Cpu(s): 3.2%us, 0.9%sy, 0.0%ni, 87.5%id, 8.3%wa, 0.0%hi, 0.1%si, 0.0%st
    Mem: 98999820k total, 98745988k used, 253832k free, 530372k buffers
    Swap: 18481144k total, 1304k used, 18479840k free, 89854800k cached
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    8424 rchitta 16 0 7053m 6.2g 4.4g S 18.3 6.5 401:01.88 java
    8422 rchitta 15 0 7011m 6.1g 4.4g S 14.6 6.5 528:06.92 java
    8423 rchitta 15 0 6989m 6.1g 4.4g S 5.7 6.5 615:28.21 java
    $ java -version
    java version "1.6.0_21"
    Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
    Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
    Maybe I should make my application a Concurrent Data Store app, as there is really only one thread doing the writes and reads; a sketch of that idea follows at the end of this post. But I would like to understand why my process is spending so much time on locking.
    Can I try any other options? How do I prevent such heavy locking from happening? Has anyone seen this kind of behavior? Maybe this is all normal; I'm pretty new to using BDB.
    If there is a way to disable locking, that would also work, as there is only one thread that's really doing all the work.
    Should I disable the file system cache? One issue is that my application does not use the cache very well: once I visit a key, I don't visit it again for a very long time, so it's very possible that the key has to be read from disk again.
    It is possible that I'm thinking about this completely wrong, focusing too much on the locking behavior, while the problem is elsewhere.
    Any thoughts/suggestions are welcome. Your help on this is much appreciated.
    Thanks,
    Rama

    Hi,
    It looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost in the Berkeley DB forum?
    Thanks,
    Mark

  • Problem integrating Oracle Forms 10g and Graphics 6i

    Hi everybody,
    I have installed Oracle Developer Suite 10g and Oracle Graphics 6i on the same machine, in separate directories of course.
    I have made all the necessary modifications to the default.env file concerning this type of integration.
    The problem is that when I press a button which calls the graphic, the system runs the Oracle Graphics batch successfully and is supposed to insert the graphic into a chart item in a form, but it doesn't: the message 'FRM-41211 Integration error: SSL failure running another product' appears.
    I have tried all the possible combinations of parameters of the RUN_PRODUCT built-in ('synchronous/asynchronous', 'batch/runtime'), but the problem persists!
    What might the problem be?
    Should I install the JRE of Developer 6i in the same ORACLE path where I installed Oracle Graphics 6i, or simply its JDK?
    Simon...

    Simon,
    a search on Metalink unveiled a couple of notes and bugs that all deal with this message. My recommendation is to contact customer support for a proper analysis of the problem.
    Frank

  • Crystal Reports access to SAP CRM 6.0 with the Integration for SAP Solutions

    Hello,
    we are running Crystal Reports 2008 with SAP CRM 6.0.
    To boost the productivity of report writing we especially need access to:
    - function modules
    - the CRM Business Object Repository (transaction SWO1).
    What kinds of SAP CRM (or SAP ERP) objects can be accessed with the Integration for SAP Solutions?
    The BO documentation [BusinessObjects XI Integration for SAP Solutions User's Guide|http://help.sap.com/businessobject/product_guides/boexir31SP2/en/xi31_sp2_bip_sap_user_en.pdf] does not give a clue as to whether this is possible.
    However, Ingo Hilgefort stated in his book that it is at least possible to access ABAP functions, SAP Queries and SAP InfoSets.
    What is the minimum product portfolio and the necessary version, and can I install the following products stand-alone?
    Crystal Reports 2008
    Integration for SAP Solutions
    Tomcat / JCo
    Or do I need at minimum BO Edge and have to install the CMS server?
    Thank you
    Martin

    Hi,
    > What kinds of SAP CRM (or SAP ERP) objects can be accessed with the Integration for SAP Solutions?
    There is also a blog about this:
    /people/ingo.hilgefort/blog/2008/03/23/businessobjects-and-sap-part-4
    > However, Ingo Hilgefort stated in his book that it is at least possible to access ABAP functions, SAP Queries and SAP InfoSets.
    Correct. It is also in the installation guide / user guide for the SAP Integration Kit. You can use ABAP functions, ABAP/SAP Queries, InfoSets and tables.
    > What is the minimum product portfolio and the necessary version, and can I install the following products stand-alone?
    Crystal Reports 2008
    BusinessObjects Integration for SAP Solutions
    BusinessObjects Edge or BusinessObjects Enterprise
    Ingo

  • 2.1 RC1 - Performance problem, typing lag in EA2 and RC1

    Hi!
    I have searched and searched the forum but cannot find any reference to this specific issue. Please accept my apologies if this has already been posted, as I can't believe there are no other occurrences of this out there.
    On moving from 1.5.5 to either 2.1 EA2 or RC1, I experience massive performance issues that make SQL Developer unusable.
    Basically, just typing into a SQL worksheet consumes most of my machine's CPU and results in a huge amount of lag: typing a simple SELECT statement takes 10 to 20 seconds just for the text to catch up with my typing! It's infuriating!
    I've disabled all of the Code Insight options, ensured the 'Select default path to look for scripts' field is empty, and tried both the JRE-inclusive and JRE-exclusive versions, all with the same result. If I fire up 1.5.5 I'm immediately back in business, with no lag between typing and the display.
    My laptop is well specced: XP Pro, 2GHz dual core, 2GB RAM.
    Any thoughts other than what I've tried above?
    Many thanks in advance!
    Edited by: user4523743 on 07-Dec-2009 03:21

    OK, this is weird: I can get around this by setting the 'Select default path to look for scripts' preference to something other than blank!
    I'm wondering if this is because a group policy sets the default home drive / My Documents folder to something on a network share. Could it be that having this value blank causes SQL Developer to poll this share, and therefore the network, causing the performance issue?
    As it is, setting the value to something on the local drive (C:\) seems to fix it, contrary to what other posts have said on the matter of this preference!

  • A technical problem integrating SAP and POS equipment

    Hi everyone,
    I have two questions I need help with:
    (1) I have some business data; I want to create an IDoc file in ABAP and save the IDoc file to the application server.
    (2) An external system creates an IDoc file and sends it to the SAP application server; I want to read the IDoc file and create an invoice from the IDoc message.
    I want to implement the functionality above. What should I do? Please give some code or clues. Thank you very much!

    Hi,
    IDocs are used for direct communication between two systems; sometimes there may be a middleware (like XI) involved.
    Why do you want to save the data to the application server, then?
    Regards,
    Atish

  • Performance problem with WL6.1, JDBC and Oracle

    Hi!
    I have a big performance problem using WL6.1, JDBC and Oracle.
    My server sends a Vector of NodeBeans via JDBC to an Oracle DB. The answer only arrives when the timeout is reached.
    Why is the EJB waiting for the timeout?
    Is it a configuration problem?
    Thanks
    Thomas

    Hi Sree,
    here I send you the class with the main problem. The call queryDataSet.refresh() is the time-consuming part.
    We are using WL6.1 SP2 and Oracle 8.1.7 with classes12. In our environment we cannot use connection pools.
    It comes back with a timeout AND the data. Calling it again immediately, it takes two-thirds of the time the timeout is set to.
    Do you have a good solution?
    Thanks
    Thomas
    package ppif.db;

    import java.rmi.*;
    import java.util.*;
    import java.math.*;
    import ppif.bo.*;
    import com.borland.dx.dataset.*;
    import com.borland.dx.sql.dataset.*;

    public class connector {

        private ppif.mapping.NodeDescriptions nodeDescriptions;

        public connector(ppif.mapping.NodeDescriptions nodeDescriptions) {
            this.nodeDescriptions = nodeDescriptions;
        }

        public connector() {
            // the self-assignment in the original post was a no-op
        }

        public Vector fetchNodes(ppif.bo.Node filterNode) throws RemoteException {
            String childClassName = null;
            Vector nodeVector = new Vector();
            if (nodeDescriptions == null) throw new RemoteException("nodeDescriptions == null");
            // determine the childClassName for the given filterNode
            if (filterNode.getClassName().equals("DefaultRoot")) childClassName = "ST";
            if (filterNode.getClassName().equals("PrismaProjectsRoot")) childClassName = "PrismaProjects";
            if (childClassName == null)
                throw new RemoteException("No childClassName was found for the given filterNode.");
            if (nodeDescriptions.getNodeDescription(childClassName) == null)
                throw new RemoteException("No mapping is defined for the child class " + childClassName + ".");
            // build the vector of child nodes via DB access
            ppif.mapping.NodeDescription nodeDescription = nodeDescriptions.getNodeDescription(childClassName);
            Database database = new Database();
            ParameterRow parameterRow = null;
            QueryDataSet queryDataSet = new QueryDataSet();
            // set up the DB connection
            database.setConnection(new com.borland.dx.sql.dataset.ConnectionDescriptor(
                    nodeDescription.dbUrl, nodeDescription.dbUser, nodeDescription.dbPassword,
                    false, "oracle.jdbc.driver.OracleDriver"));
            int queryCount = 0;
            // loop over all queries of the child class, as defined in the mapping
            for (Iterator it = nodeDescription.mappingQueries.iterator(); it.hasNext();) {
                ppif.mapping.MappingQuery mappingQuery = (ppif.mapping.MappingQuery) it.next();
                String queryString = mappingQuery.getQueryString();
                parameterRow = mappingQuery.getParameterRow();
                // fill the ParameterRow with the input values
                for (int i = 0; i < parameterRow.getColumnCount(); i++) {
                    String columnName = parameterRow.getColumn(i).getColumnName();
                    String filterAttributeName = columnName;
                    Object filterAttributeValue = filterNode.getAttribute(filterAttributeName).getVal1();
                    switch (parameterRow.getColumn(i).getDataType()) {
                        case com.borland.dx.dataset.Variant.BIGDECIMAL:
                            BigDecimal val = (BigDecimal) filterAttributeValue;
                            parameterRow.setBigDecimal(i, val);
                            break;
                        case com.borland.dx.dataset.Variant.STRING:
                            parameterRow.setString(i, (String) filterAttributeValue);
                            break;
                        default:
                            throw new RemoteException("Unknown data type");
                    }
                }
                // build the query
                queryDataSet.setQuery(new com.borland.dx.sql.dataset.QueryDescriptor(
                        database, queryString, parameterRow, false, Load.ALL));
                // open the DB connection
                queryDataSet.open();
                // set the DB role
                if (nodeDescription.dbRole != null) {
                    String roleName = nodeDescription.dbRole;
                    String rolePwd = nodeDescription.dbRolePassword;
                    try {
                        java.sql.Connection jdbcConnection = database.getJdbcConnection();
                        java.sql.Statement setRole = jdbcConnection.createStatement();
                        setRole.execute("SET ROLE " + roleName + " IDENTIFIED BY " + rolePwd);
                    } catch (java.sql.SQLException exception) {
                        throw new RemoteException(exception.getMessage());
                    } catch (Exception exception) {
                        throw new RemoteException(exception.getMessage());
                    }
                }
                // execute the query -- this is the time-consuming call
                queryDataSet.refresh();
                // convert the result rows into Node objects
                int columnCount = queryDataSet.getColumnCount();
                String[] columnArray = queryDataSet.getColumnNames(columnCount);
                for (queryDataSet.first(); queryDataSet.inBounds(); queryDataSet.next()) {
                    Node rNode = nodeDescriptions.createNodeInstance(childClassName);
                    // copy the query results into the node's attribute list
                    for (int i = 0; i < columnCount; i++) {
                        // (the attribute-copying statements were lost in the original post)
                    }
                    nodeVector.add(rNode);
                }
                queryCount++;
            }
            return nodeVector;
        }

        public ppif.mapping.NodeDescriptions getNodeDescriptions() {
            return nodeDescriptions;
        }

        public void setNodeDescriptions(ppif.mapping.NodeDescriptions nodeDescriptions) {
            this.nodeDescriptions = nodeDescriptions;
        }
    }
    "Sree Bodapati" <[email protected]> wrote:
    Hi Thomas,
    please post more detail on what your code is doing and, if possible, a code snippet/error messages/thread dump.
    sree
