DSC 7.1 - Compact Database is SLOW

Another time DSC shows how slow it really is.
I need to compact my database since it is holding 5 months of data (configured to hold 90 days)
So I load MAX and ask it to compact the database. ERROR "unspecified error"
ok shut down the tag engine - I can live with no fresh data being logged for an hour.
Start the compact - oooh 50% almost immediately
OK lots of activity on my SCSI RAID 0 (two Seagate 76gig 15krpm drives on adaptec dual channel raid board)
the computer is working pretty good (1% in 15 minutes) CPU usage (Dual 3.0ghz Xeon) is only 7%
file manager shows there are files being added (70+)
hmmm finally done after 1 hour 17 minutes
file manager still shows 5 months of data (was 15gig now 14.7gig)
use MAX to see how much data is there
-LOTS of SCSI activity
-ta da MAX shows all the data is still there
-MAX shows the "lifespan" of the data equals 90 days
This may seem to be another DSC "rant" and it probably is. I just want the software to do what it is advertised to do. If Compact does not work "don't include a menu for it" - If Archive takes 18 hours to extract some data "fix it" - If the database is corrupt "TELL US! don't make us look at CPU Usage to find out when it finishes"
OK I am done with the rant.
How do I remove the expired data?

Hello,
To remove expired data, you can do destructive archiving. Unfortunately it is slow and we are addressing this issue. I am sure you know how to do destructive archiving but to the benefit of others who might come across this discussion, here it is.
1. In MAX, under Historical Database, select Citadel 5 Universe. This brings up all the databases on the right-hand side.
2. Right-click the database you want to archive and select Archive.
3. In the wizard, select the data you want to archive and hit Next.
4. Select the destination database you want the data archived to and hit Next.
5. In this final step, make sure you have selected the option to destroy the data after it has been archived.
Regards,
Arun V
National Instruments

Similar Messages

  • A user says your database is slow - how do you solve this and identify the reasons?

    A user says your database is slow - how do you solve this and how do you identify the reasons?

    Blame the developers for that: a badly tuned query will reduce your performance. You may also blame the users: a lot of concurrent users running inefficient SQL statements will strangle your system.
    You may find further suggestions in your duplicated thread --> Some inter view Questions Please give prefect answer  help me

  • Database is slow due to indexes.

    Hi,
    Our database is slow and we are trying to identify duplicate indexes on a table within a schema, where the index names may differ but the indexed columns are the same (not necessarily in the same order). Please send me a query that finds duplicate indexes.
    Thanks

    Dear,
    As Nicolas emphasized, you need facts and proof before claiming that your performance problem is due to indexes.
    Anyway, when speaking about duplicate indexes you first need to understand what a duplicate index is. Let me show you a simple example.
    mhouri>drop table t1;
    Table dropped.
    mhouri>create table t1(a number, b number, c varchar2(10), d date, x number, y  number);
    Table created.
    mhouri>create index t1_i1 on t1(a,b);
    Index created.
    I have created a simple table and added to it a simple composite index on (a,b).
    Now, I will create a duplicate index
    mhouri>create index t1_i2 on t1(a);
    Index created.
    Do you know why this index is considered a duplicate? Simply because the first index I created has its leading column = a, and hence the second index t1_i2 is covered by the first index t1_i1.
    And what about the following index
    mhouri>create index t1_i3 on t1(b,a);
    Index created.
    Is it a duplicate index? The answer is NO, it is not, because there is no existing index starting with the couple (b,a). However, if you have the intention to create an index, say t1_i4(b), then do not do it, because it would be covered by the index t1_i3(b,a).
    Finally, you can use a simple select such as the following one
    define m_table_name    = &m_table
    set verify off
    set linesize 100
    select substr(uc1.table_name,1,25)  table_name
          ,substr(uc1.index_name,1,30)  index_name
          ,substr(uc1.column_name,1,10) column_name
          ,uc1.column_position          column_pos
    from user_ind_columns uc1
    where uc1.table_name   = upper('&m_table_name')
    order by
        uc1.index_name
       ,uc1.column_position
    ;
    which will give you a list of existing indexes for the input table together with their columns and the position of those columns. Based on your knowledge of what a duplicate index is, you can analyse and act accordingly.
    The above select when executed against our current table t1 gives the following picture
    mhouri>start c:\red-index.sql
    Enter value for m_table: t1
    TABLE_NAME                INDEX_NAME                     COLUMN_NAM COLUMN_POS                     
    T1                        T1_I1                          A                   1                     
    T1                        T1_I1                          B                   2                     
    T1                        T1_I2                          A                   1                     
    T1                        T1_I3                          B                   1                     
    T1                        T1_I3                          A                   2
    Here we can point out that there are two indexes, t1_i1 and t1_i2, starting with the same column A. This is why the index t1_i2 is not necessary and should not have been created in the first place.
    Hope this helps
    Mohamed Houri
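Building on Mohamed's explanation, the manual inspection can be partly automated. The following is a hedged sketch (not from the original thread) that self-joins USER_IND_COLUMNS to flag index pairs on the same table sharing the same leading column - the "covered" candidates described above:

```sql
-- Hypothetical helper: list index pairs whose leading column matches.
-- Pairs reported here are only *candidates*; verify coverage manually
-- as explained above before dropping anything.
select c1.table_name,
       c1.index_name  as index_1,
       c2.index_name  as index_2,
       c1.column_name as leading_column
from   user_ind_columns c1,
       user_ind_columns c2
where  c1.table_name      = c2.table_name
and    c1.column_position = 1
and    c2.column_position = 1
and    c1.column_name     = c2.column_name
and    c1.index_name      < c2.index_name
order  by c1.table_name, c1.index_name;
```

Run against the t1 example above, this would report the pair (T1_I1, T1_I2) on leading column A, while t1_i3 is left alone.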

  • Database very slow/hangs.

    Hello,
    My company's server has a database of size 400GB, distributed on 2 HDDs of 300GB each. Previously I faced the problem of the database performing very slowly. At that time the database was much smaller and I just moved the REDO LOG FILE to the other HDD using the rename command. That worked and the database was working great again. But now the scenario is quite different than before, as given below ...
    Tablespace Name - LHSERP
    No. Of. Datafiles in LHSERP Tablespace - 55  ( Which are distributed on 2 HDD )
    Hard Disk Drive (HDD) in server - 2
    Capacity of HDD - 300GB each. Now both HDDs have datafiles from the LHSERP tablespace. I tried to move the redo log files to the other HDD but to no avail - no improvement in performance. The database is still slow.
    One more thing I need to make a point of: when I look at Task Manager on the server it shows some RED and GREEN colour graph of CPU usage. Does it mean anything serious? Even the whole OS works quite slowly on the server - right from opening My Computer, to logging into the user, to firing a query - everything is very slow. Should I try to sort out this problem in a different direction?
    Can you suggest me what to do next ... to improve database performance. If you have anymore ideas please let me know.
    ORACLE DATABASE 10g
    Windows Server 2008 64-Bit
    Thanks in advance ....

    Hi,
    What is your AWR snapshot retention time? I normally set it to 30 days so that in case of a problem I can compare my current AWR with my historical AWRs. Now, can you take an AWR report from when your database was doing well, then the latest AWR report, compare them, and see what the difference is? Has redo log generation increased? What are the top 5 wait events in the good AWR report versus the current bad one? What are the top SQLs (elapsed time, CPU time) in the good and bad AWRs? What were the top segments in the bad and good AWRs?
    Doing this will give you insight into the problematic area.
    Can you check the ADDM report - is Oracle recommending anything to look into?
    What is the CPU usage? Even if Oracle is the top consumer, can you check the CPU usage for the past 24 hours - is it touching 100%?
    You should have OEM configured with the database. From OEM, can you check the host hard disks' performance and the busy percentage of your hard disks for the past month, and check whether there was any increase in the hard disk busy rate?
    Doing all of the above will certainly help you identify the problematic area.
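To pick the "good" and "bad" intervals Salman describes, a quick first step (a generic sketch, not from this thread) is to list the available snapshots and then run the standard report script for each pair:

```sql
-- List existing AWR snapshots with their time ranges, so a "good"
-- and a "bad" interval can be chosen for comparison.
select snap_id,
       begin_interval_time,
       end_interval_time
from   dba_hist_snapshot
order  by snap_id;

-- Then generate each report interactively with the standard script:
-- SQL> @?/rdbms/admin/awrrpt.sql
```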
    Salman

  • Logging large .ctl to citadel database is slow

    Hi, All
    I'm using LV 8.5  with DSC.
    I'm logging my own .ctl-typed shared variables to the Citadel database. This .ctl type includes 52 double, string and boolean data types, and I have 200 shared variables of this type, all of whose values I record to the Citadel database. All operations - writing, reading, archiving, and deleting the database - are very slow, and they also consume a lot of CPU time. Are there any capacity limits on a Citadel database? What is the best structure for recording this kind of data? I have tested the attached structure, but it seems to be very slow because I have to delete and create database trace references after the first database is full.
    BR,
    Attachments:
    code_structure.jpg ‏51 KB

    When you say "...to Log Data only when necessary... " I assume you are using Set Tag Attribute.vi to establish this behavior.
    National Instruments recently added a feature (as a hot-fix) to the DSC Engine to ignore timestamps coming from servers, to prevent logging values with "back-in-time" timestamps. Citadel is really sensitive to values going back in time (it logs a NaN), and therefore retrieval of Citadel data from such back-in-time traces can act weird.
    You can find more info from:
    Why Do I See a Lot of NaN (Not-a-Number) In My Citadel Database When I Use the Set Tag Attribute.vi?
    How Do I Avoid Out-of-Synch (a.k.a....
    The Hot-Fix can be found:
    LabVIEW Datalogging and Supervisory Control Module Version 6.1 for Windows 2000/95/98/ME/NT/XP -- Fi...
    I assume you may have run into such a use case. It happened to me, too, and I've created a small VI which analyzes traces for back-in-time (NaN - Not a Number) values. I assume the missing data in DIAdem is those Not-a-Numbers, a.k.a. breaks.
    If you still encounter problems after applying the DSCEngine.ini setting UseServerTimestamps=false, you might contact a National Instruments Support Engineer.
    Hope this helps
    Roland
    Attachments:
    BackInTimeAnalyzer.llb ‏622 KB

  • AWR - Database Performance Slow

    If my whole database's performance is slow,
    does running an AWR report include statistics for the current period in which performance is slow?

    The default AWR Snapshot Interval is 1 hour. So, if you have the default implementation, you will be able to create an AWR report for the period 10am to 11am. It will not reflect what or why "slowness" occurred at 10:45. The statistics in the AWR report will be a summation / averaging of all the activity in the entire hour.
    You could modify the snapshot interval (using dbms_workload_repository.modify_snapshot_settings) to have Oracle collect snapshots every 15 minutes. But that will only apply after the change has been made. So, if you have slowness subsequently, you will be able to investigate it with the AWR report for that period; but what has been collected in the past at hourly intervals cannot be refined any further.
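As a concrete illustration of Hemant's suggestion (the values are examples, not recommendations), note that both arguments are expressed in minutes:

```sql
-- Snapshot every 15 minutes, keep 30 days of AWR history.
begin
  dbms_workload_repository.modify_snapshot_settings(
    interval  => 15,            -- minutes between snapshots
    retention => 30 * 24 * 60   -- 30 days, expressed in minutes
  );
end;
/
```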
    Hemant K Chitale

  • Database Performance Slow

    Hi to all,
    My database performance has suddenly become slow. My PGA cache hit percentage remains at 96%.
    I will list the findings I found...
    Some tables have not been analyzed since Dec 2007. Some tables were never analyzed.
    (If the tables were analyzed, would performance improve in this scenario?)
    PGA allocated is 400MB, but the max PGA allocated so far is 95MB since the instance started (11 Nov 08 - instance start date).
    (I presume we have over-allocated PGA; can I reduce it to 200MB and increase the Shared Pool and Buffer Cache by 100MB each?)
    Memory Configuration:
    Buffer Cache: 504 MB
    Shared Pool: 600 MB
    Java Pool: 24MB
    Large Pool: 24MB
    SGA Max Size is: 1201.72 MB
    PGA Aggregate is: 400 MB
    My Database resided in Windows 2003 Server Standard Edition with 4GB of RAM.
    Please give me suggestions.
    Thanks and Regards,
    Vijayaraghavan K

    Vijayaraghavan Krishnan wrote:
    My database performance has suddenly become slow. My PGA cache hit percentage remains at 96%.
    Some tables have not been analyzed since Dec 2007. Some tables were never analyzed.
    PGA allocated is 400MB, but the max PGA allocated so far is 95MB since the instance started (11 Nov 08 - instance start date).
    (I presume we have over-allocated PGA; can I reduce it to 200MB and increase the Shared Pool and Buffer Cache by 100MB each?)
    You are in an awkward situation - your database is behaving badly, but it has been in an unhealthy state for a very long time, and any "simple" change you make to address the performance could have unpredictable side effects.
    At this moment you have to think at two levels - tactical and strategic.
    Tactical - is there anything you can do in the short term to address the immediate problem.
    Strategic - what is the longer-term plan to sort out the state of the database.
    Strategically, you should be heading for a database with correct indexing, representative data statistics, optimum resource allocation, minimum hacking in the parameter file, and (probably) implementation of "system statistics".
    Tactically, you need to find out which queries (old or new) have suddenly introduced an extra work load, or whether there has been an increase in the number of end-users, or other tasks running on the machine.
    For a quick and dirty approach you could start by checking v$sql every few minutes for recent SQL that might be expensive; or run checks for SQL that has executed a very large number of times, or has used a lot of CPU, or has done a lot of disk I/O or buffer gets.
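The quick-and-dirty v$sql check could look like the following sketch (column names assume 10g or later; on earlier releases substitute address/hash_value for sql_id):

```sql
-- Ten most expensive cached statements by buffer gets; swap the
-- ORDER BY column for disk_reads, cpu_time or executions as needed.
select *
from  (select sql_id,
              executions,
              buffer_gets,
              disk_reads,
              cpu_time,
              substr(sql_text, 1, 60) as sql_snippet
       from   v$sql
       order  by buffer_gets desc)
where rownum <= 10;
```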
    You could also install statspack and start taking snapshots hourly at level 7, then run off reports covering intervals when the system is slow - again, a quick check would be to look at the "SQL ordered by .." sections of the report to find the expensive SQL.
    If you are lucky, there will be a few nasty SQL statements that you can identify as responsible for most of your resource usage - then you can decide what to do about them
    Regarding pga_aggregate_target: this is a value that is available for sharing across all processes; from the name you've used, I think you may be looking at a figure for a single specific process - so I wouldn't reduce the pga_aggregate_target just yet.
    If you want to post a statspack report to the forum, we may be able to make a few further suggestions. (Use the "code" tags - in curly brackets { } - to make the report readable in a fixed font.)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "The temptation to form premature theories upon insufficient data is the bane of our profession."
    Sherlock Holmes (Sir Arthur Conan Doyle) in "The Valley of Fear".

  • RMAN duplicate database suddenly slow

    Hi Everyone,
    I posted in the wrong forum last time, sorry about that.
    I used RMAN to duplicate a database to a different box on the same local network. Here is the scenario:
    boxA: target database (PROD) -- 250G
    database:Oracle 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    OS:Windows Server 2003 Enterprise x64 Edition
    boxB: cloned database
    database:Oracle 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    OS:Windows Server 2003 Enterprise x64 Edition
    After preparing the necessary steps - listener, tnsnames, etc. - I ran the following script:
    C:\>rman target sys/[email protected] auxiliary sys/[email protected] catalog rman/rman
    RMAN> RUN
    {
    SET NEWNAME FOR DATAFILE 1 TO 'D:\oracle\undb\system01.dbf';
    .... (some set newname omitted here)
    SET NEWNAME FOR DATAFILE 15 TO 'D:\oracle\undb\RMAN01.ORA';
    SET NEWNAME FOR TEMPFILE 1 TO 'D:\oracle\undb\TEMP01.ORA';
    # manually allocate three auxiliary channels for disk
    ALLOCATE AUXILIARY CHANNEL aux1 DEVICE TYPE DISK;
    ALLOCATE AUXILIARY CHANNEL aux2 DEVICE TYPE DISK;
    ALLOCATE AUXILIARY CHANNEL aux3 DEVICE TYPE DISK;
    DUPLICATE TARGET DATABASE TO undb
    LOGFILE
    GROUP 1 ('D:\oracle\undb\redo01.log') SIZE 100M REUSE,
    GROUP 2 ('D:\oracle\undb\redo02.log') SIZE 100M REUSE,
    GROUP 3 ('D:\oracle\undb\redo03.log') SIZE 100M REUSE;
    }
    The database duplicated successfully last week.
    On Monday, I created another database on boxA (testdb) and used a similar script (just made the necessary changes - database name, file names, etc.), and the duplication was also successful.
    Yesterday, I tried to duplicate PROD, doing exactly the same thing as on Monday, and everything was very slow:
    RMAN> report schema; took over 30 minutes,
    RMAN> list backup; took another 30 minutes,
    then I deleted testdb on boxA - nothing better.
    I have tried from boxA and also from boxB; things were similar.
    (When testing from boxB, I input: rman target sys/[email protected] auxiliary sys/[email protected] catalog rman/[email protected])
    when I input:
    RMAN> report schema;
    and checked from Enterprise Manager,
    SELECT RECID , STAMP , THREAD# , SEQUENCE# , FIRST_CHANGE# LOW_SCN , FIRST_TIME LOW_TIME , NEXT_CHANGE# NEXT_SCN ,
    RESETLOGS_CHANGE# , RESETLOGS_TIME FROM V$LOG_HISTORY WHERE RECID BETWEEN :1 AND :1 AND RESETLOGS_TIME IS NOT NULL
    AND STAMP >= :1 ORDER BY RECID
    the query runs for a long time; my V$LOG_HISTORY has 18688 rows, and select count(*) from V$LOG_HISTORY takes less than 1 second.
    I am wondering what the reason is.
    Can anybody give me a clue?
    Thank you very much.

    Thanks Alan,
    Hardware spec:
    duplicate database server CPU: 2x1995 MHz, RAM: 4G, hard disk 500G
    production server CPU: 2x2993 MHz, RAM: 5G, hard disk 900G
    The production server is better than the duplicate server. The production server runs only the Oracle 10g server; the duplicate server is brand new and nothing is running on it except Windows and the Oracle software.
    I noticed that during peak hours RMAN is really slow no matter which box I run it on; during off-peak hours it's reasonable.
    Is there anything I can do on the RMAN side?
    Thanks again

  • Keeping data in server compact database AFTER deployment

    Hey!
    I have a small problem regarding SQL Server Compact and Visual Studio 2010: My application uses a database, the purpose is to manage article stocks.
    My problem now is that I have an existing Excel sheet with articles which I want to put into the database. As of now I am changing the code for the specific Excel worksheets and the respective row indexes manually. To exemplify: I have workSheet[1] and need the content of rows 20 to 178. In my code, I change the worksheet index and adapt my for-loop to the right row indexes. Then I start debugging and repeat the process for another set of rows.
    I could of course create input textboxes for entering rows and worksheet indexes in my project, so that after deploying it I can enter my desired values.
    However, out of curiosity, I would like to know if there is a way to keep data read at debugging time in my database for use after deployment.
    Regards
    pat3d3r

    Use a full path to your database file in your connection string, and data will persist. In addition, if you have the database file as a project item, set its Copy to Output Directory property to Never.
    You also need to think about where to place the file after deployment; I have some ideas here:
    http://erikej.blogspot.dk/2013/10/sql-server-compact-4-desktop-app-with.html
    Please mark as answer, if this was it. Visit my SQL Server Compact blog http://erikej.blogspot.com

  • Stand by database is slow

    Hi all,
    I am testing Data Guard on my laptop with WinXP; I am using Oracle version 9.2.
    Data Guard is implemented and working fine.
    The problem I am facing is a very slow response from the standby database: it takes a long time to start up, change recovery mode, and change to read-only mode, whereas the primary database is working fine.
    Any idea what could be the reason.
    Regards,
    Asim

    A couple of days earlier I implemented Data Guard on the same laptop; at that time the primary and standby databases were working fine with no performance (slowness) issues. Then I needed to reinstall Windows, and now that I have configured Data Guard once again I face the stated (standby slow) problem.
    I have done some trial and error, and finally stopped all other databases and started only the standby in nomount mode, but the problem remains.
    :-) I agree with your point; this is not normal behaviour.
    Further, I think that to solve this problem I might reinstall Windows; probably then this problem will be solved and some new one might arise.
    Regards,

  • Compact Flash  Reader slow

    Before buying the new Mac Pro (2 GHz with 2 GB of RAM) I had the G4 mirror door, 1.25 GHz with 1.25 GB RAM. I use the Delkin compact flash card reader, FireWire. Last night was the first time I used the computer to download images from my digital camera. I thought to myself, this is really going to be a lot faster than on the G4 - wrong! It seemed to take just as long as before. I am using a variety of cards, one of them the Kingston Ultimate, 4 GB, 133X. I tried with the reader connected to the FireWire port on both the back and front of the computer - no difference. Should this card reader work faster on the new computer? Another question: I don't know if there are card readers that are USB 2; if so, would they be faster?
    thanks

    I think Barefeats took a look and reviewed some CF cards and readers, focusing, I think, on FireWire 800. But if I recall correctly, the FW800 product has been removed.
    USB 2 is going to be as slow as or slower than FireWire.
    Things that are faster: CPU, memory, drives, and PCIe.

  • Database upgrade - slow query performance

    Hi,
    recently we upgraded our 8i database to a 10g database.
    While we were testing our Forms application against the new 10g database, there was a very slow SQL statement which runs for several minutes but runs against the 8i database within seconds.
    With SQL*Plus in 10g it sometimes runs fast, sometimes slow (see execution plans below).
    The sql-statement in detail:
    SELECT name1, vornam, aboid, liefstat
    FROM aktuellerabosatz
    WHERE aboid = evitadba.get_evitaid ('0000002100')
    "aktuellerabosatz" is a view on a table with about 3.000.000 records.
    The function get_evitaid gets only the substring of the last 4 digits of the whole number.
    execution plan with slow responce time:
    12:05:31 EVITADBA-TSUN>SELECT name1, vornam, aboid, liefstat
    12:05:35 2 FROM aktuellerabosatz
    12:05:35 3 WHERE aboid = evitadba.get_evitaid ('0000002100');
    NAME1 VORNAM ABOID L
    RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
    1 row selected.
    Elapsed: 00:00:55.07
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=38)
    1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4
    Card=1 Bytes=38)
    2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=
    3 Card=1)
    Statistics
    100 recursive calls
    0 db block gets
    121353 consistent gets
    121285 physical reads
    0 redo size
    613 bytes sent via SQL*Net to client
    500 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    execution plan with fast response time:
    12:06:43 EVITADBA-TSUN>SELECT name1, vornam, aboid, liefstat
    12:06:58 2 FROM aktuellerabosatz
    12:06:58 3 WHERE aboid = evitadba.get_evitaid ('0000002100');
    NAME1 VORNAM ABOID L
    RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
    1 row selected.
    Elapsed: 00:00:00.00
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=38)
    1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4
    Card=1 Bytes=38)
    2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=
    3 Card=1)
    Statistics
    110 recursive calls
    8 db block gets
    49 consistent gets
    0 physical reads
    0 redo size
    613 bytes sent via SQL*Net to client
    500 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    In the fast response, the consistent gets and physical reads are very small, but the other time they are very high, which (it seems) results in the slow performance.
    What could be the reasons?
    What could be the reasons?
    kind regards
    Marco

    The two execution plans above are both from 10g SQL*Plus sessions on the same database with the same user. We gather statistics for the database with the dbms_stats package; normally we use the all_rows option. The confusing thing is that sometimes the SQL statement runs fast and sometimes slow in a SQL*Plus session with the same execution plan - only the physical reads and consistent gets are extremely different.
    If we rewrite the SQL statement to use the table evtabo with an additional where clause (taken from the view) instead of using the view, then it runs fast:
    14:24:04 H00ZRETH-TSUN>SELECT name1, vornam, aboid, liefstat
    14:24:14 2 FROM aktuellerabosatz
    14:24:14 3 WHERE aboid = evitadba.get_evitaid ('0000000246');
    no rows selected
    Elapsed: 00:00:43.07
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=27315 Card=1204986
    Bytes=59044314)
    1 0 VIEW OF 'EVTABO_V1' (VIEW) (Cost=27315 Card=1204986 Bytes=
    59044314)
    2 1 TABLE ACCESS (FULL) OF 'EVTABO' (TABLE) (Cost=27315 Card
    =1204986 Bytes=45789468)
    14:24:59 H00ZRETH-TSUN>SELECT name1, vornam, aboid, liefstat
    14:25:26 2 FROM evtabo
    14:25:26 3 WHERE aboid = evitadba.get_evitaid ('0000002100')
    14:25:26 4 and gueltab <= TRUNC(sysdate) AND (gueltbs >=TRUNC(SYSDATE) OR gueltbs IS NULL);
    NAME1 VORNAM ABOID L
    RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
    1 row selected.
    Elapsed: 00:00:00.00
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=38)
    1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4
    Card=1 Bytes=38)
    2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=
    3 Card=1)
    What could be the reason for the different performance in 8i and 10g?
    Thanks
    Marco

  • Database becomes slow maybe due to "SUCCESS: diskgroup ORAARCH was dismount

    Hello,
    I got performance complaint from client side like
    " when we are running the software GUI interface from our laptops we are noticing a problem when we add new users and do any type of device sort to the data base. The system slows down to a crawl and any other users on the system are unable to do any tasks. "
    I checked database alert log, got
    SUCCESS: diskgroup ORAARCH was mounted
    SUCCESS: diskgroup ORAARCH was dismounted
    SUCCESS: diskgroup ORAARCH was mounted
    SUCCESS: diskgroup ORAARCH was dismounted
    SUCCESS: diskgroup ORAARCH was mounted
    SUCCESS: diskgroup ORAARCH was dismounted
    Is the reason? any solution?
    The database is 10.2.0.4.0 on Linux.
    thank you
    Edited by: ROY123 on Feb 23, 2012 1:09 PM

    ROY123 wrote:
    Hello,
    I got performance complaint from client side like
    " when we are running the software GUI interface from our laptops we are noticing a problem when we add new users and do any type of device sort to the data base. The system slows down to a crawl and any other users on the system are unable to do any tasks. "
    I checked database alert log, got
    SUCCESS: diskgroup ORAARCH was mounted
    SUCCESS: diskgroup ORAARCH was dismounted
    SUCCESS: diskgroup ORAARCH was mounted
    SUCCESS: diskgroup ORAARCH was dismounted
    SUCCESS: diskgroup ORAARCH was mounted
    SUCCESS: diskgroup ORAARCH was dismounted
    Is the reason? any solution?
    The database is 10.2.0.4.0 on Linux.
    This is just an indicator of how many times the diskgroup was mounted and dismounted. Mount and dismount events happen whenever requests are sent to the diskgroup and results are sent back to the database. So we can say that when nothing is happening on the database side, i.e. the database doesn't need I/O, the diskgroup will be idle and will show its status as dismounted in the alert.log. Having said this, it is not a performance problem for you.
    Rather, you should be checking AWR and OS stats to look for the performance problem.
    Hope this helps

  • Importing photos from camera or compact flash so slow urgent help.

    A little bit gutted - this happened on my old MBP, and now I have a new one with the same problem.
    On the 25th I took part in the Canon marathon, taking nearly 1000 RAW photos on a Canon 7D.
    Importing took so long that I could only get some of the photos in by the deadline; my Mac also had trouble getting onto the network, and when it did it was dead slow compared with all the other Windows laptops.
    I went home to use my own internet and again I could not upload to their entry form - only 2 MB JPEGs.
    I continued to download. Although I did not get to enter under their rules, I phoned the agent and he said to e-mail them. So much trouble e-mailing the photos. I took it to the Apple store the next day; they could not get the e-mail to send with the attachment.
    The import from my compact flash, in the end, took 9 hours.
    Yesterday I photographed a wedding. I said to my wife this would take hours to transfer to my Mac - I had one 32 GB card and two 16 GB cards.
    She said use hers; I said no, this Mac is powerful. Gutted: it took 3 hours to download 11 GB of photos.
    She took the same card and it took less than 10 minutes.
    The only difference: she is using Windows with 2 GB RAM, 2.3 GHz, 4 cores,
    against my MacBook Pro with 8 GB RAM, 2.2 GHz, 4 cores:
    Model Name:          MacBook Pro
      Model Identifier:          MacBookPro8,2
      Processor Name:          Intel Core i7
      Processor Speed:          2.2 GHz
      Number of Processors:          1
      Total Number of Cores:          4
      L2 Cache (per Core):          256 KB
      L3 Cache:          6 MB
      Memory:          8 GB
      Boot ROM Version:          MBP81.0047.B27
      SMC Version (system):          1.69f3
      Serial Number (system):          C0******V7N
      Hardware UUID:        
      Sudden Motion Sensor:
      State:          Enabled
    I have 134 GB of free space. I tried the Canon program, direct drag, and Aperture. Please help, as this is embarrassing and not good publicity for the Mac, or for myself as a photographer. I'm being laughed at.

    A little bit gutted, as this happened on my old MBP and now I have a new one with the same problem.
    On the 25th I took part in the Canon marathon, shooting nearly 1000 RAW photos on a Canon 7D.
    Importing took so long that I could only get some of the photos in before the deadline, and my Mac had trouble getting onto the network; when it finally did, it was dead slow compared with all the other Windows laptops.
    I went home to use my own internet and again I could not upload to their entry form, only 2 MB JPEGs.
    I continued downloading. Although I did not get to enter their draw, I phoned the agent and he said to email the photos, but I had so much trouble emailing them. I took the machine to the Apple store the next day, and they could not get the email to send with attachments either.
    The import from my CompactFlash card took 9 hours in the end.
    Yesterday I photographed a wedding and said to my wife that this would take hours to transfer to my Mac (I had one 32 GB card and two 16 GB cards).
    She said to use her laptop. I said no, this Mac is powerful. Gutted: it took 3 hours to download 11 GB of photos.
    She took the same card and it took less than 10 minutes.
    The only difference: she is using Windows with 2 GB RAM and a 2.3 GHz quad-core,
    against my MacBook Pro with 8 GB RAM and a 2.2 GHz quad-core:
    Model Name:          MacBook Pro
      Model Identifier:          MacBookPro8,2
      Processor Name:          Intel Core i7
      Processor Speed:          2.2 GHz
      Number of Processors:          1
      Total Number of Cores:          4
      L2 Cache (per Core):          256 KB
      L3 Cache:          6 MB
      Memory:          8 GB
      Boot ROM Version:          MBP81.0047.B27
      SMC Version (system):          1.69f3
      Serial Number (system):          C0******V7N
      Hardware UUID:        
      Sudden Motion Sensor:
      State:          Enabled
    I have 134 GB of free space. I tried the Canon program, direct drag-and-drop, and Aperture. Please help, as this is embarrassing and not good publicity for the Mac, or for myself as a photographer. I'm being laughed at.

  • Database is slow

    Hi,
    We recently migrated the database from a single instance to a 2-node RAC, and since then we have been observing performance degradation. The most frequently observed wait events on the database are buffer busy waits and db file sequential read. Another observation is that the library cache miss rate always stays above 60%.
    It's a Documentum application.
    The application uses only two tablespaces, one for data and another for indexes. Kindly suggest ways in which I can boost the performance.
    vamsi

    Hello,
    Well, moving from a single install to RAC is about more than just high availability. You need to do some tuning, and I mean serious tuning.
    There are huge amounts of resources out on the web about this. In short, here is what you are facing:
    "The main way to reduce buffer busy waits is to reduce the total I/O on the system. This can be done by tuning the SQL to access rows with fewer block reads (i.e., by adding indexes). Even if we have a huge db_cache_size, we may still see buffer busy waits, and increasing the buffer size won't help.
    The resolution of "buffer busy wait" events is one of the most confounding problems with Oracle. In an I/O-bound Oracle system, buffer busy waits are common, as evidenced by any system with read (sequential/scattered) waits in the top five waits."
    Hope this helps you get on your way. Check in the database forums for more help, or just have your local DBA tune the database. If you are running Enterprise Edition then you should have access to the performance tools, including the SQL Tuning Advisor and the Segment Advisor.
    When you generate a snapshot report, check whether you have any ITL waits, see which segments and blocks the database is hot for, and see which SQL statements are hot (meaning how many times each has been executed, and how many buffers it reads every time).
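    As a rough illustration of those checks (a sketch only; it assumes you have SELECT privileges on the dynamic performance views, and the exact column set can vary by Oracle version), queries like the following show which segments suffer the most buffer busy waits and which statements read the most buffers per execution:

    ```sql
    -- Top 10 segments by buffer busy waits since instance startup
    SELECT * FROM (
      SELECT owner, object_name, object_type, value AS busy_waits
        FROM v$segment_statistics
       WHERE statistic_name = 'buffer busy waits'
       ORDER BY value DESC)
     WHERE ROWNUM <= 10;

    -- Top 10 SQL statements by total buffer gets, with gets per execution
    SELECT * FROM (
      SELECT sql_id, executions, buffer_gets,
             ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec
        FROM v$sql
       ORDER BY buffer_gets DESC)
     WHERE ROWNUM <= 10;
    ```

    Segments that top the first list (especially index blocks on the two shared tablespaces you mention) are the usual candidates for partitioning, reverse-key indexes, or more freelists; statements that top the second list are where SQL Tuning Advisor time is best spent.
    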
    Hope this helps.
    Jan
