KM Performance with DB Repository of 2TB

Hello,
our customer is planning to store a large number of small documents (500 KB - 1 MB). Calculations show that we would hit 2.2 TB (terabytes) within a year. Is KM capable of storing such a huge amount? Until now we host around 22 GB in a DB repository.
The limit of 1000 files per folder for good performance is known and will be observed. But I doubt that KM can handle that much data. Can anybody prove me wrong?
Thank you
Marius Quadflieg
Edited by: Marius Quadflieg on Apr 21, 2010 4:34 PM

I don't think that storing it will be any problem at all.
What will be much more important is the usage patterns. The only way to really find out is to test.

Similar Messages

  • Upgrade of database where the GC repository resides

    I have GC 10.2.0.3 running with the repository stored in a 9.2.0.8 database.
    I would like to upgrade the database to 10.2.0.3 using dbua if possible. When the dbua sees an upgrade to 10.2 it creates a new SYSMAN schema, but I already have one that the Grid Control install created when I used the option "install into an existing database."
    I've searched MetaLink on how to do this, and created an SR, but am having trouble getting support to understand what I'm attempting.
    I'm open to anything, creating a new database, etc. The only thing I want to be sure of is not to lose the information that I've already established in Grid Control, and I'm assuming it's stored in the SYSMAN schema.

    Totally fresh clean 10gR2 database on a different host and platform.
    2.3 Export/Import
    If the source and destination databases are non-10g, then export/import is the only option for cross-platform database migration.
    For better export/import performance, set higher values for BUFFER and RECORDLENGTH. Do not export to NFS, as it will slow down the process considerably.
    Direct path can be used to increase performance. Note – as EM uses VPD, conventional mode will only be used by Oracle on tables where a policy is defined.
    Also, the user running the export should have the EXEMPT ACCESS POLICY privilege in order to export all rows, as that user is then exempt from VPD policy enforcement. SYS is always exempt from VPD or Oracle Label Security policy enforcement, regardless of the export mode, application, or utility used to extract data from the database.
    2.3.1 Prepare for Export/Import
    * Mgmt_metrics_raw partitions check
    select table_name,partitioning_type type,
    partition_count count, subpartitioning_type subtype from
    dba_part_tables where table_name = 'MGMT_METRICS_RAW'
    If MGMT_METRICS_RAW has more than 3276 partitions, please see Bug 4376351 – this bug is fixed in 10.2. Old partitions should be dropped before export/import to avoid this issue – this will also speed up the export/import process.
    To drop old partitions, run: exec emd_maintenance.partition_maintenance
    (This requires shutting down the OMS and setting job_queue_processes to 0 while the partitions are being dropped.) Please refer to the EM Performance Best Practices document for more details on usage.
    The workaround to avoid bug 4376351 is to export mgmt_metrics_raw in conventional mode – this is needed only if the partition drop is not run. Note: dropping the old partitions is highly recommended.
    * Shut down OMS instances and prepare for migration
    Shut down the OMS, set job_queue_processes to 0 and remove the DBMS jobs using the commands
    connect /as sysdba
    alter system set job_queue_processes=0;
    connect sysman/<password>
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql
    2.3.2 Export
    Before running the export, make sure that the NLS_LANG variable matches the database character set. For example, after running this query
    SQL> select value from nls_database_parameters where PARAMETER='NLS_CHARACTERSET';
    VALUE
    WE8ISO8859P1
    Then the NLS_LANG environment variable should be set to AMERICAN_AMERICA.WE8ISO8859P1
    * Export data
    exp full=y constraints=n indexes=n compress=y file=fullem102_1.dmp log=fullem102exp_1.log
    Provide the system username and password when prompted.
    Verify the log file and make sure that no character set conversion happened (the line "possible charset conversion" should not be present in the log file).
    * Export without data and with constraints
    exp full=y constraints=y indexes=y rows=n file=fullem102_2.dmp log=fullem102exp_2.log
    Provide the system username and password when prompted.
    2.3.3 Import
    Before running the import, make sure that the NLS_LANG variable matches the database character set.
    * Run RepManager to drop target repository (if target database has EM repository installed)
    cd ORACLE_HOME/sysman/admin/emdrep/bin
    RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop
    * Pre-create the tablespaces and the users in target database
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql
    For the first two scripts, provide the input arguments when prompted, or pass them on the command line, for example
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql MGMT_TABLESPACE <path>/mgmt.dbf <size of mgmt.dbf> <autoextend size> MGMT_ECM_DEPOT_TS <path>/mgmt_ecm_depot1.dbf <size of mgmt_ecm_depot1.dbf> <autoextend size>
    @/scratch/nagrawal/OracleHomes/oms10g/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql sysman <sysman password> MGMT_TABLESPACE TEMP CENTRAL ON
    * Import data -
    imp constraints=n indexes=n FROMUSER=sysman TOUSER=sysman buffer=2097152 file=fullem102_1.dmp log=fullem102imp_1.log
    * Import without data and with constraints -
    imp constraints=y indexes=y FROMUSER=sysman TOUSER=sysman buffer=2097152 rows=n ignore=y file=fullem102_2.dmp log=fullem102imp_2.log
    Verify the log file and make sure that no character set conversion happened (the line "possible charset conversion" should not be present in the log file).
    2.3.4 Post Import EM Steps
    * Please refer to Section 3.1 for Post Migration EM Specific Steps
    3 Post Repository Migration Activities
    3.1 Post Migration EM Steps
    The following EM-specific steps should be carried out post migration -
    * Recompile all invalid objects in sysman schema using
    connect sysman/<password>
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_recompile_invalid.sql
    * Run post plugin steps to recompile any invalids, create public synonyms, create other users, enable VPD policy, repin packages-
    connect sysman/<password>
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql
    Provide ORACLE_HOME/sysman/admin/emdrep/sql for em_sql_root
    SYSMAN for em_repos_user
    MGMT_TABLESPACE for em_tablespace_name
    TEMP for em_temp_tablespace_name
    Note – the users created by admin_post_import will have the same passwords as their usernames.
    Check for invalid objects – compare source and destination schemas for any discrepancy in counts and invalids.
    * The following queues are not enabled after running admin_post_import.sql (EM bug 6439035); enable them manually by running
    connect sysman/<password>
    exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_TASK_Q');
    exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_PAF_RESPONSE_Q');
    exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_PAF_REQUEST_Q');
    exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_LOADER_Q');
    * Please check for contexts using the following query
    connect sysman/<password>
    select * from dba_context where SCHEMA='SYSMAN';
    If any of the following contexts are missing, create them using
    connect sysman/<password>
    create or replace context storage_context using storage_ui_util_pkg;
    create or replace context em_user_context using setemusercontext;
    * Partition management
    Check that the necessary partitions are created so that the OMS does not run into problems loading into non-existent partitions (this problem can only occur if there is a gap of days between export and import) –
    exec emd_maintenance.analyze_emd_schema('SYSMAN');
    This will create all the necessary partitions up to the current date.
    * Submit EM dbms jobs
    Reset job_queue_processes back to its original value and resubmit the EM DBMS jobs
    connect /as sysdba
    alter system set job_queue_processes=10;
    connect sysman/<password>
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql
    * Update OMS properties and startup OMS
    Update emoms.properties to reflect the migrated repository (oracle.sysman.eml.mntr.emdRepConnectDescriptor).
    Update the host name and port with the correct values and start the OMS.
    * Relocate “Management Services and Repository” target
    If the "Management Services and Repository" target needs to be migrated to the destination host, delete the old "Management Services and Repository" target and add it again with the same name on the agent running on the new machine.
    * Run the following SQL to verify that the repository collections are enabled for the emrep target
    SELECT target_name,
           metric_name,
           task.task_id,
           task.interval,
           task.error_message,
           TRUNC((mgmt_global.sysdate_utc - next_collection_timestamp) / 1440) delay
    FROM   mgmt_collection_metric_tasks mtask,
           mgmt_collection_tasks task,
           mgmt_metrics met,
           mgmt_targets tgt
    WHERE  met.metric_guid = mtask.metric_guid
    AND    tgt.target_guid = mtask.target_guid
    AND    mtask.task_id = task.task_id(+)
    AND    met.source_type > 0
    AND    met.source != ' '
    AND    tgt.target_type = 'oracle_emrep'
    ORDER BY mtask.task_id;
    This query should return the same records in both the source and destination databases. If you find any collections missing in the destination database, run the following to schedule them there
    DECLARE
      traw  RAW(16);
      tname VARCHAR2(256);
      ttype VARCHAR2(64);
    BEGIN
      SELECT target_name, target_type, target_guid
      INTO   tname, ttype, traw
      FROM   mgmt_targets
      WHERE  target_type = 'oracle_emrep';
      mgmt_admin_data.add_emrep_collections(tname, ttype, traw);
    END;
    /
    * Discover/relocate database and database listener targets
    Delete the old repository database target and listener, and rediscover the target database and listener in EM.

  • Cannot perform operation, the repository is not presented on any servers.

    Hi,
    I have a pool: lanz
    and under this pool there are 2 servers.
    Now there are 2 NFS shares.
    nfs-pool -> (main one)
    nfs:/mnt/vm/nfs/ovm
    nfs:/mnt/pool/serverpool/serverpool
    nfs:/mnt/pool/repository/myrepo
    Now I have added 2 NFS shares as repositories:
    nfs:/mnt/pool/serverpool/serverpool : Server A
    nfs:/mnt/pool/repository/myrepo
    Server A has permission on nfs:/mnt/pool/serverpool/serverpool,
    but when I try to give Server B permission on nfs:/mnt/pool/repository/myrepo,
    it does not show up in the server list under Home -> Repository.
    So when I try to create a VM under Server B, the repository is always empty.
    I did refresh nfs:/mnt/pool/repository/myrepo with Server B,
    but it still does not allow Server B to use this repo.
    Is there anything special I need to do?
    When I try to delete nfs:/mnt/pool/repository/myrepo from the repositories I see the error:
    OVMRU_002006E: nfs:/mnt/pool/serverpool/serverpool - Cannot perform operation, the repository is not presented on any servers.
    Please advise.

    I haven't found a way to resync. Doing a rediscover doesn't do a thing. I do see in the ovs-agent.log where the server does resync with the manager but it doesn't make any difference when it comes to the repository.
    Anyone know how to reset the DB for the manager and then maybe it can resync with the servers?
    I was hoping for a way to force a sync between the VM 3.0 servers and the database. The problem appears to be that maintenance of the repository is a manager function now, and the manager just uses the servers to do the work. Somehow the servers are out of sync with the manager. The servers think they are still connected properly, but the manager thinks they are disconnected, except that if you highlight the repository in the manager it shows the servers the repo is currently presented to. When I try to do anything with the repo, the manager says no servers are available to do the work.
    Thanks
    Bill

  • OPA work with database repository

    Does OPA work with a database repository?

    OPA does not provide direct database connectivity. Your application retrieves all the necessary data, then passes it to OPA to perform auditable determinations and calculations.
    Davin Fifield

  • Report  performance with Hierarchies

    Hi
    How can we improve query performance with hierarchies? We have to do a lot of navigation in the query and the volume of data is very big.
    Thanks
    P G

    Hi,
    check these:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Query Performance – Is "Aggregates" the way out for me?
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    ° The OLAP cache is architected to store query result sets and to give all users access to those result sets.
    If a user executes a query, the result set for that query’s request can be stored in the OLAP cache; if that same query (or a derivative) is then executed by another user, the subsequent query request can be filled by accessing the result set already stored in the OLAP cache.
    In this way, a query request filled from the OLAP cache is significantly faster than one that receives its result set from database access.
    ° The indexes that are created on the fact table for each dimension allow you to easily find and select the data.
    see http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
    ° When you load data into the InfoCube, each request has its own request ID, which is included in the fact table in the packet dimension.
    This (besides giving the possibility to manage/delete single requests) increases the volume of data and reduces reporting performance, as the system has to aggregate over the request ID every time you execute a query. Using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0).
    This function is critical because compressed data can no longer be deleted from the InfoCube using its request IDs, so, logically, you must be absolutely certain that the data loaded into the InfoCube is correct.
    see http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/content.htm
    ° By using partitioning you can split up the whole dataset of an InfoCube into several smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, and also when deleting data from the InfoCube.
    see http://help.sap.com/saphelp_nw04/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
    Hope it helps!
    Thank you,
    dst

  • Are there issues with poor performance with Mavericks OS?

    Are there issues with slow and reduced performance with Mavericks OS?

    check this
    http://apple.stackexchange.com/questions/126081/10-9-2-slows-down-processes
    or
    this:
    https://discussions.apple.com/message/25341556#25341556
    I am doing a lot of analyses with 10.9.2 on a late 2013 MBP, and these analyses generally last hours or days. I observed that Mavericks slows them down considerably for some reason after a few hours of computation, making it impossible for me to work with this computer...

  • Performance with the new Mac Pros?

    I sold my old Mac Pro (first generation) a few months ago in anticipation of the new line-up. In the meantime, I purchased a i7 iMac and 12GB of RAM. This machine is faster than my old Mac for most Aperture operations (except disk-intensive stuff that I only do occasionally).
    I am ready to purchase a "real" Mac, but I'm hesitating because the improvements just don't seem that great. I have two questions:
    1. Has anyone evaluated qualitative performance with the new ATI 5870 or 5770? Long ago, Aperture seemed pretty much GPU-constrained. I'm confused about whether that's the case anymore.
    2. Has anyone evaluated any of the new Mac Pro chips for general day-to-day use? I'm interested in processing through my images as quickly as possible, so the actual latency to demosaic and render from the raw originals (Canon 1-series) is the most important metric. The second thing is having reasonable performance for multiple brushed-in effect bricks.
    I'm mostly curious if anyone has any experience to point to whether it's worth it -- disregarding the other advantages like expandability and nicer (matte) displays.
    Thanks.
    Ben

    Thanks for writing. Please don't mind if I pick apart your statements.
    "For an extra $200 the 5870 is a no brainer." I agree on a pure cost basis that it's not a hard decision. But I have a very quiet environment, and I understand this card can make a lot of noise. To pay money, end up with a louder machine, and on top of that realize no significant benefit would be a minor disaster.
    So, the more interesting question is: has anyone actually used the 5870 and can compare it to previous cards? A 16-bit 60 megapixel image won't require even .5GB of VRAM if fully tiled into it, for example, so I have no ability, a priori, to prove to myself that it will matter. I guess I'm really hoping for real-world data. Perhaps you speak from this experience, Matthew? (I can't tell.)
    Background work and exporting are helpful, but not as critical for my primary daily use. I know the CPU is also used for demosaicing or at least some subset of the render pipeline, because I have two computers that demonstrate vastly different render-from-raw response times with the same graphics card. Indeed, it is this lag that would be the most valuable of all for me to reduce. I want to be able to flip through a large shoot and see each image at 100% as instantaneously as possible. On my 2.8 i7 that process takes about 1 second on average (when Aperture doesn't get confused and mysteriously stop rendering 100% images).
    Ben

  • Performance with Boot Camp/Gaming?

    Hi,
    I just acquired a MBP/2GHz IntelCD/2GB RAM/100GB/Superdrive, with Applecare. Can anyone comment about the performance with
    Boot Camp -- running Windows XP SP2, and what the gaming graphics are like?
    Appreciate it, thanks...
    J.
    Powerbook G4 [15" Titanium - DVI] Mac OS X (10.4.8) 667MHz; 1GB RAM; 80GB

    Well, I didn't forget to mention what I did not know yet.... So that's not exactly correct..
    As per Apple's support page, http://support.apple.com/specs/macbookpro/MacBook_Pro.html
    My new computer does have 256MB of video memory...

  • Performance with external display

    Hello,
    when I connect my 19'' TFT to the MacBook, the performance (with the same applications running) is really bad. It takes longer to switch between apps, and if I don't use an app for some time, it can take up to 30 seconds to "reactivate" it.
    Because the HD is working when I switch apps, it looks like the OS is swapping. My question: would it help to upgrade the MacBook to 2 GB of RAM? AFAIK the Intel graphics chip uses shared memory.
    Thanks for your help
    Till
    MacBook 1.83 GHz/1GB   Mac OS X (10.4.8)  

    How much RAM do you have? Remember that the MB does not have dedicated VRAM like some computers do and that it uses the system RAM to drive the graphics chipset.
    I use my MB with the mini-DVI to DVI adapter to drive a 20" widescreen monitor without any of the problems that you describe, but I have 2GB of RAM. If you only have the stock 512MB of RAM, that may be part of what you are seeing.

  • Performance with LinkedList in Java

    Hello All,
    Please suggest any solution to further improve the performance with List.
    The problem description is as follows:
    I have a huge number of objects, let's say 10,000, and need to store them in a collection such that inserting an object at some particular index i gives the best performance.
    I suppose that, as I need index-based access, using a List makes the most sense, since Lists are ordered.
    Using LinkedList over ArrayList gives better performance in the aforementioned context.
    Is there a way I can further improve the performance of LinkedList while solving this particular problem,
    OR
    is there any other index-based collection with which I get better performance than LinkedList?
    Thanks in advance

    The trouble with a LinkedList as implemented in the Java libraries is that if you want to insert at index 100, it has no option but to step through the first 100 links of the list to find the insertion point. Likewise if you retrieve by index. The strength of the linked-list approach is lost if you work by index, and the List interface gives no other way to insert in the middle of the list. The natural interface for a linked list would include an extended Iterator with methods for insert and replace. Of course LinkedLists are fine when you insert first or last.
    My guess would be that if your habitual insertion point was halfway or more through the list, then ArrayList would serve you better, especially if you leave ample room for growth. Moving array elements up is probably not much more expensive, per element, than walking the linked list. Maybe 150% or thereabouts.
    Much depends on how you retrieve, and on how volatile the list is. Certainly if you are in a read-mostly situation and cannot use an iterator, then a LinkedList won't suit.
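    To make the trade-off concrete, here is a rough, illustrative sketch (the class name, sizes and inserted values are invented for this example, not taken from the thread). It contrasts index-based insertion into an ArrayList and a LinkedList with insertion through a ListIterator, which is essentially the "extended Iterator with methods for insert and replace" mentioned above: the iterator still has to walk to the position once, but further add/set calls at that position avoid repeated traversals.
    import java.util.ArrayList;
    import java.util.LinkedList;
    import java.util.List;
    import java.util.ListIterator;

    public class InsertSketch {
        public static void main(String[] args) {
            int n = 10_000;          // roughly the size mentioned in the question
            int insertAt = n / 2;    // habitual insertion point in the middle

            // ArrayList: add(index, e) shifts the tail of the backing array up by one.
            List<Integer> arrayList = new ArrayList<>();
            for (int i = 0; i < n; i++) arrayList.add(i);
            arrayList.add(insertAt, -1);

            // LinkedList via index: add(index, e) must walk ~index links first.
            List<Integer> linkedList = new LinkedList<>();
            for (int i = 0; i < n; i++) linkedList.add(i);
            linkedList.add(insertAt, -1);

            // LinkedList via ListIterator: walk to the position once, then
            // add()/set() work at the cursor without another traversal.
            ListIterator<Integer> it = linkedList.listIterator(insertAt);
            it.add(-2);   // insert at the current position
            it.next();    // advance to the element that followed the insertion point
            it.set(-3);   // replace the element just returned
        }
    }
    As noted above, which variant wins depends on where the habitual insertion point is and how the list is read; timing both on realistic data (for example with System.nanoTime() around the insertion loop) is the only reliable way to decide.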

  • Performance with dates in the where clause

    Performance with dates in the where clause
    CREATE TABLE TEST_DATA (
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    );
    create index t_indx on test_data(fdata);
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    2) From the execution plans, I see that queries 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for execution plans 2 & 3?
    3) Could someone explain what the filter & access predicates mean here?
    Thanks in advance.
    Execution Plan 1:
    SQL> select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1486387033
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 517 (20)| 00:00:07 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | TABLE ACCESS FULL| TEST_DATA | 341 | 3069 | 517 (20)| 00:00:07 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(INTERNAL_FUNCTION("FDATE"))=TRUNC(SYSDATE@!))
    Note
    - dynamic sampling used for this statement
    Statistics
    4 recursive calls
    0 db block gets
    1610 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 2:
    SQL> select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(SYSDATE@!)<=TRUNC(SYSDATE@!)+.9999884259259259259259
    259259259259259259)
    3 - access("FDATE">=TRUNC(SYSDATE@!) AND
    "FDATE"<=TRUNC(SYSDATE@!)+.999988425925925925925925925925925925925
    9)
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows
    Execution Plan 3:
    SQL> select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TO_DATE('21-APR-10','dd-MON-yy')<=TO_DATE('21-APR-10
    23:59:59','DD-MON-YY hh24:mi:ss'))
    3 - access("FDATE">=TO_DATE('21-APR-10','dd-MON-yy') AND
    "FDATE"<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    Hi,
    user10541890 wrote:
    Performance with dates in the where clause
    CREATE TABLE TEST_DATA (
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    );
    create index t_indx on test_data(fdata);
    Did you mean fdate (ending in e)?
    Be careful; post the code you're actually running.
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    To use an index, the indexed column must stand alone as one of the operands. If you had a function-based index on TRUNC(fdate), then it might be used in Query 1, because the left operand of = is TRUNC(fdate).
    2) From the execution plans, I see that queries 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
    That depends on what you mean by "better".
    If "better" means faster, you've already shown that one is about as good as the other.
    Queries 2 and 3 are doing different things. Assuming the table stays the same, Query 2 may give different results every day, but the results of Query 3 will never change.
    For clarity, I prefer:
    WHERE   fdate >= TRUNC(SYSDATE)
    AND     fdate <  TRUNC(SYSDATE) + 1
    (or replace SYSDATE with a TO_DATE expression, depending on the requirements).
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for execution plans 2 & 3?
    3) Could someone explain what the filter & access predicates mean here?
    Sorry, I can't.

  • Can I partition my HD now for FCPX since I didn't do it when I downloaded it? My performance with FCPX is terrible

    When I downloaded FCPX I was told I really didn't need to partition my hard drive. But my performance with FCPX is terrible, from constant rendering to waiting a minute after a click for it to do anything. I turned off rendering and transcoded the media to ProRes (went to File and hit Transcode), but it still crashes and is buggy and sluggish, and it takes hours to do anything. I have my media on an external drive, but I did screw up and put some media on the internal drive. iMac OS X 10.7.2, 3.06 GHz Core 2 Duo, external LaCie HD on FireWire 800.

    Partitioning the internal drive is - imho! - completely useless for UNIX systems such as Mac OS, due to excessive usage of (hidden!) temp files.
    And, especially, a too-small internal HDD partition for the system will bring any app to a full halt.
    Plus, 10.7 has some issues with managing Time Machine backups when it comes to large files such as video - again, if the internal HDD/OS drive is too small, it gets iffy.
    At least 30+ GB free, only codecs and material as intended by Cupertino = no problems here (on a much smaller/older setup).

  • Slow Performance with Business Rules

    Hello,
    Has anyone ever had slow performance with business rules? For example, I attached a calc script to a form and it ran for 20 seconds. I made an exact replica of the calc script in a business rule and it took 30 seconds to run. Also, when creating/modifying business rules in EAS it takes a long time to open, save, or attach security - any ideas on how to improve this performance?
    Thanks!

    If you are having issues with the performance of assigning access, then I am sure there was a patch available; it was either an HSS patch or a Planning patch.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Error while registering user with owb repository.

    Hi all,
    I am using owb 10gr2.
    I am trying to register a user with the OWB repository. I am getting the error below.
    ORA-00955: name is already used by an existing object
    But when I try to deploy the mapping, it shows the message
    Cannot deploy PL/SQL maps to the target schema because it is not owned by the Control Center.
    I am sure the user is not registered with the OWB repository, but I don't know why it is showing the ORA-00955 error. Can somebody tell me what the possible cause is?
    vasu

    Hi,
    the user was created outside OWB ("... because it is not owned by the Control Center"), directly in the database. You must first use the Repository Assistant (advanced options) to make him a target user.
    Then you can register him in OWB as a repository user with special rights.
    Regards,
    Detlef

  • JCO3: NullPointerException unregistering server with custom repository

    Hello,
    I have a JCO 2 project running on a J2EE server that we ported over to JCO3, and the ultimate issue is that when I stop the JCoServer in the application, it throws a NullPointerException. Seemingly because the JCoServers are in a higher classloader, they are still there on restart, and then I get the NullPointerException trying to recreate the server as well. I very well may be doing something totally wrong here...
    I'm creating the Data Provider, Repository, and Server as follows:
    ========================================================
    // This is a simple implementation of ServerDataProvider
    markviewDataProvider = new MarkviewServerDataProvider();
    if (!Environment.isServerDataProviderRegistered()) {
      log.debug("Registering data provider");
      Environment.registerServerDataProvider(markviewDataProvider);
    } else {
      log.error("Data provider already registered.");
      throw new IllegalStateException("Data provider already registered.  Please restart application server.");
    }
    // Create the necessary properties and assign them to the data provider.
    // Their values are being read from a database.
    serverProperties.setProperty(ServerDataProvider.JCO_GWSERV, rfcServerInfo.getGatewayServer());
    serverProperties.setProperty(ServerDataProvider.JCO_GWHOST, rfcServerInfo.getGatewayHost());
    serverProperties.setProperty(ServerDataProvider.JCO_PROGID, rfcServerInfo.getProgramId());
    serverProperties.setProperty(ServerDataProvider.JCO_CONNECTION_COUNT, maxConnections.toString());
    //XXX: We have to get this to work or else the server will not restart correctly
    //More on this later.
    //serverProperties.setProperty(ServerDataProvider.JCO_REP_DEST, "custom destination");
    markviewDataProvider.setServerProperties(serverName, serverProperties);
    // Get back the configured server from the factory.
    JCoServer server;
    try {
      server = JCoServerFactory.getServer(serverName);
    } catch (JCoException ex) {
      throw new RuntimeException("Unable to create the server " + serverName
        + ", because of " + ex.getMessage(), ex);
    }
    // rfcRepository is a singleton that contains
    // repository = JCo.createCustomRepository(name);
    // and methods to add structures to the repository
    server.setRepository(rfcRepository.exposeRepository());
    // add in the handlers, start the server
    ========================================================
    So... the first issue is this: if I set ServerDataProvider.JCO_REP_DEST to anything, the program won't run, because it looks for the repository host connection info in a text file with that name, just like the example code does.
    According to the javadoc, custom repositories don't have a destination, so it seems this property should be NULL.
    Yet if I leave that property null, it works fine... until I need to stop the server (or I get the same error when the above code runs again on restart, as the JCoServerFactory.getServer code runs in both places - it tries to refresh a server that survived the restart because it failed to shut down due to the error).
    For completeness, the shutdown code essentially reads:
    =========================================================
    JCoServer server = JCoServerFactory.getServer(serverName);
    server.stop();
    =========================================================
    Then what happens is this:
    java.lang.NullPointerException
            at com.sap.conn.jco.rt.StandaloneServerFactory.compareServerCfg(StandaloneServerFactory.java:91)
            at com.sap.conn.jco.rt.StandaloneServerFactory.getServerInstance(StandaloneServerFactory.java:116)
            at com.sap.conn.jco.server.JCoServerFactory.getServer(JCoServerFactory.java:56)
            at com.markview.integrations.sap.gateway.sap2mv.JCoServerController.destroy(JCoServerController.java:310)
    I decompiled and debugged StandaloneServerFactory and I'm seeing this in compareServerCfg:
    =========================================================
            key = "jco.server.repository_destination";
            if(!current.getProperty(key).equals(updated.getProperty(key)))
                return CompareResult.CHANGED_RESTART;
            } else
                return CompareResult.CHANGED_UPDATE;
    =========================================================
    Clearly, if getProperty() returns null, the .equals call is going to raise a NullPointerException, so this appears to be the cause of that error. In other words, if this code runs, I'm going to have this problem, because the property needs to (seemingly) be null for my program to run at all.
    Any insights on how to fix this / work around this would be greatly appreciated!
    Thanks in advance,
    Corey

    <div style="width:58em;text-align:left">
    Whoa, a long posting and I'm not sure I read it thoroughly enough to fully understand your problem - so please be patient with me if any comments are silly...
    Your RFC server program provides an implementation of your own ServerDataProvider. If you reference a repository destination, the default DestinationDataProvider implementation will look for a file named after that destination with the extension .jcoDestination. If you don't like this behavior you simply need to provide your own implementation and register it, the same as you did for the server data.
    According to the javadoc, custom repositories don't have a destination, so it seems this property should be NULL.
    Not sure where you found this. I'm assuming that you're talking about com.sap.conn.jco.JCoCustomRepository, which can of course be backed by a destination, i.e. see the comments in method setDestination:
    <div style="text-align:left">Set the destination for the remote queries.
    If the requested meta data objects are not available in the repository cache, i.e. not set by the application, the remote query to the specified destination will be started. Default value is null meaning the remote queries are not allowed.
    Note: As soon the destination is provided, the repository can contain mixed meta data - set statically by the application and requested dynamically from the ABAP backend.</div>
    I usually prefer using a connection to a backend providing the meta data instead of hard-coding the meta data insertion into a custom repository. As the data is cached, this introduces an acceptable overhead for retrieving meta data in long-running server programs.
    Anyhow, when you create your custom repository via JCo.createCustomRepository(java.lang.String name) you obviously must give it a name. That's the name I'd expect in the server data as a reference for your repository (property ServerDataProvider.JCO_REP_DEST). However, if your custom repository is not backed by a complete online repository you obviously have to ensure that all meta data is already provided.
    So for testing it might be interesting to see if adding a JCoDestination to your custom repository helps. Another option might be to enable tracing and see what goes wrong...
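    In case it helps to see the idea in code, here is a rough, untested sketch of that approach: register your own DestinationDataProvider (instead of relying on the file-based default), give the destination a name that can also be used for ServerDataProvider.JCO_REP_DEST, and back the custom repository with that destination. The class name, the destination name MY_REPOSITORY_DEST and all connection values are made up for illustration, not taken from the original posts.
    ========================================================
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import com.sap.conn.jco.JCo;
    import com.sap.conn.jco.JCoCustomRepository;
    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoException;
    import com.sap.conn.jco.ext.DestinationDataEventListener;
    import com.sap.conn.jco.ext.DestinationDataProvider;
    import com.sap.conn.jco.ext.Environment;

    // Minimal in-memory DestinationDataProvider, so the repository destination
    // name does not have to resolve to a .jcoDestination file on disk.
    public class InMemoryDestinationDataProvider implements DestinationDataProvider {

        private final Map<String, Properties> destinations = new HashMap<String, Properties>();

        public void addDestination(String name, Properties properties) {
            destinations.put(name, properties);
        }

        public Properties getDestinationProperties(String destinationName) {
            return destinations.get(destinationName);
        }

        public void setDestinationDataEventListener(DestinationDataEventListener listener) {
            // This static provider never fires change events.
        }

        public boolean supportsEvents() {
            return false;
        }

        public static void main(String[] args) throws JCoException {
            InMemoryDestinationDataProvider provider = new InMemoryDestinationDataProvider();
            Environment.registerDestinationDataProvider(provider);

            // Placeholder connection data for the backend that supplies the repository meta data.
            Properties repoProps = new Properties();
            repoProps.setProperty(DestinationDataProvider.JCO_ASHOST, "sap-host");
            repoProps.setProperty(DestinationDataProvider.JCO_SYSNR, "00");
            repoProps.setProperty(DestinationDataProvider.JCO_CLIENT, "100");
            repoProps.setProperty(DestinationDataProvider.JCO_USER, "rfc_user");
            repoProps.setProperty(DestinationDataProvider.JCO_PASSWD, "secret");
            repoProps.setProperty(DestinationDataProvider.JCO_LANG, "EN");
            provider.addDestination("MY_REPOSITORY_DEST", repoProps);

            // The same name can then go into ServerDataProvider.JCO_REP_DEST, and the
            // custom repository can fall back to that destination for missing meta data.
            JCoCustomRepository repository = JCo.createCustomRepository("MyCustomRepository");
            JCoDestination repoDestination = JCoDestinationManager.getDestination("MY_REPOSITORY_DEST");
            repository.setDestination(repoDestination);
        }
    }
    ========================================================
    With a provider like this registered, JCO_REP_DEST no longer has to be left null just to avoid the file lookup, which should also let compareServerCfg compare two non-null values when the server is stopped or recreated.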
    Cheers, harald
