Database performance side effects of huge inserts
All,
I know this might not be the right place to raise this question; I just thought some of you might have come across this in database programming.
I have a table in the database into which there will be a huge number of insertions throughout the day. The volume would be around 20,000 rows per day, and there is no process other than this one that queries the table.
My question is whether so many INSERTs on one table would affect the performance of the whole database in any way.
Would this affect the response time of processes working with other tables in the same database?
My question is whether so many INSERTs on one table would affect in any way the performance of the whole database?
Would this affect the response time of the processes working with other tables in the same database?
Yeah since it's running on the same PC.
The volume of insertions per day would be around 20000 rows. And there is no process other than this which would be querying this table.
But 20,000 rows a day is a very small insert volume for a database, so you needn't worry too much about performance. If you want the inserts to run faster, use a PreparedStatement, or use batch inserts.
One exception is if you're inserting 20,000 rows into a very big table with many indexes on it, or with imported/exported keys referencing other big tables. That will be slower.
Well, this table won't have any foreign keys on any big table, though the table itself will be huge: it will grow to about 1 GB of data in a month. I am not concerned about the performance of queries on this table at any stage, but would continuous insertions (say 10 INSERTs a second) into this table affect the performance of the rest of the tables in the database in any way?
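Since the advice above centers on PreparedStatement and batch inserts, here is a minimal sketch of the batching idea. Python's built-in sqlite3 stands in for the real database, and the table name and row shape are invented for illustration; in JDBC the equivalent is PreparedStatement.addBatch()/executeBatch().

```python
import sqlite3

# sqlite3 stands in for the real database in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")

# A day's worth of rows, per the thread: ~20,000 inserts.
rows = [(i, "event-%d" % i) for i in range(20000)]

# One prepared statement reused for every row, sent as a single batch
# inside one transaction -- instead of 20,000 individually parsed and
# committed INSERT statements.
with conn:
    conn.executemany("INSERT INTO events (id, payload) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 20000
```

The win comes from parsing the statement once and paying the commit cost once per batch rather than once per row.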
Similar Messages
-
Db2bak side effects on the database
Dear All,
I would like to know what is the impact / side effect of db2bak on the database?
Does it lock the database while running?
Moreover, if you can provide me with more ideas/ info about any other side effects would do me great.
Regards,
Scotty
I am running a stored procedure that uses getXML, and I would like to know how much memory it's using up. I was told to increase the Java pool, but it did not help with a large getXML query. What should I monitor for getXML?
Thanks. -
Can performance be affected by setting db_file_simultaneous_writes too high?
Today we raised the parameter db_file_simultaneous_writes from its default to 32.
db_writer_processes = 4
db_block_lru_latches = 8
disk_async_io=TRUE (we use rawdevices)
number cpu's = 8
Disk utilization is 100% on the index area. There are 14 processes inserting 7 million records a day into one table. We see that the database writers are the top 4 disk I/O users.
I was thinking to set the db_writer_processes to 8, so perhaps the contention on the database writers is not so heavy.
Please share your thoughts, and anything I can use to analyze this further.
Thanks
On Page 5, second paragraph under the "SEQUENTIAL IO IS TREATED SPECIALLY BY ORACLE" section, the <a href="http://www.oracle.com/technology/deploy/availability/pdf/oow2000_same.pdf">OPTIMAL STORAGE CONFIGURATION MADE EASY</a> document says:
"To achieve this, parameters such as db_file_multiblock_read_count should be set to one megabyte, stripe widths should be set to one megabyte, and OS IO size limits should be set to at least one megabyte."
You usually need to check the date and context of articles like this.
Unfortunately this article isn't dated, but you get some clues from the figures it gives for disk latency and transfer rates, and there is a reference to Oracle 8.1 - I think this article is around 10 years old.
As far as context goes - the point you quote comes from a section that suggests it applies if you need to make your large reads efficient. In other words, you are expecting, or want, to see a lot of tablescan activity - which would be consistent with the type of system where you would want to set db_file_multiblock_read_count high.
Your point, though, is correct - if you set a large value for db_file_multiblock_read_count then you encourage the (older) cost based optimizer towards tablescans and index fast full scans. In 8i you counterbalance this by carefully reducing optimizer_index_cost_adj; from 9i onwards you address the side-effects by enabling system statistics (aka CPU costing); and in 10g you don't set db_file_multiblock_read_count at all, but let Oracle decide on values for the "optimizer read count" and the "run time read count".
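As a back-of-envelope illustration of why this parameter biases the optimizer (this is the classic approximation for the old I/O-based costing, not Oracle's exact formula, and the block counts are invented):

```python
# Old-style I/O costing approximated a full tablescan at roughly
# table_blocks / db_file_multiblock_read_count: each multiblock read
# fetches up to that many adjacent blocks in one I/O.
def tablescan_cost(table_blocks, multiblock_read_count):
    return -(-table_blocks // multiblock_read_count)  # ceiling division

blocks = 100_000  # hypothetical table size in blocks

print(tablescan_cost(blocks, 8))    # 12500
print(tablescan_cost(blocks, 128))  # 782
```

Raising the read count makes the tablescan look roughly 16x cheaper here while the cost of an indexed access is unchanged - which is exactly the bias toward tablescans and index fast full scans described above.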
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
Hi to all,
My database performance has suddenly gone slow. My PGA cache hit percentage remains at 96%.
I will list the findings so far:
Some tables have not been analyzed since Dec 2007; some tables were never analyzed.
(If the tables were analyzed, would performance improve in this scenario?)
PGA allocated is 400 MB, but the maximum PGA allocated since the instance started (11 Nov 08) is 95 MB.
(I presume we have over-allocated the PGA - can I reduce it to 200 MB and increase the shared pool and buffer cache by 100 MB each?)
Memory Configuration:
Buffer Cache: 504 MB
Shared Pool: 600 MB
Java Pool: 24MB
Large Pool: 24MB
SGA Max Size is: 1201.72 MB
PGA Aggregate is: 400 MB
My database resides on Windows 2003 Server Standard Edition with 4 GB of RAM.
Please give me suggestions.
Thanks and Regards,
Vijayaraghavan K
Vijayaraghavan Krishnan wrote:
My database performance has suddenly gone slow. My PGA cache hit percentage remains at 96%.
Some tables have not been analyzed since Dec 2007; some tables were never analyzed.
PGA allocated is 400 MB, but the maximum PGA allocated since the instance started (11 Nov 08) is 95 MB.
(I presume we have over-allocated the PGA - can I reduce it to 200 MB and increase the shared pool and buffer cache by 100 MB each?)
You are in an awkward situation - your database is behaving badly, but it has been in an unhealthy state for a very long time, and any "simple" change you make to address the performance could have unpredictable side effects.
At this moment you have to think at two levels - tactical and strategic.
Tactical - is there anything you can do in the short term to address the immediate problem?
Strategic - what is the longer-term plan to sort out the state of the database?
Strategically, you should be heading for a database with correct indexing, representative data statistics, optimum resource allocation, minimum hacking in the parameter file, and (probably) implementation of "system statistics".
Tactically, you need to find out which queries (old or new) have suddenly introduced an extra work load, or whether there has been an increase in the number of end-users, or other tasks running on the machine.
For a quick and dirty approach you could start by checking v$sql every few minutes for recent SQL that might be expensive; or run checks for SQL that has executed a very large number of times, or has used a lot of CPU, or has done a lot of disk I/O or buffer gets.
You could also install Statspack and start taking snapshots hourly at level 7, then run off reports covering intervals when the system is slow - again, a quick check would be to look at the "SQL ordered by .." sections of the report for the expensive SQL.
If you are lucky, there will be a few nasty SQL statements that you can identify as responsible for most of your resource usage - then you can decide what to do about them.
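The quick-and-dirty triage can be sketched as follows. The rows below are invented stand-ins for what you would pull from v$sql (the dictionary keys mirror its executions, buffer_gets and cpu_time columns); nothing here talks to a real database:

```python
# Rank recent statements by several resource metrics -- the same idea as
# checking v$sql for statements with very high executions, CPU, or gets.
vsql = [
    {"sql": "SELECT ... FROM orders ...",  "executions": 90000, "buffer_gets": 3_600_000, "cpu_time": 45},
    {"sql": "UPDATE stock SET ...",        "executions": 1200,  "buffer_gets": 250_000,   "cpu_time": 310},
    {"sql": "SELECT ... FROM audit_log",   "executions": 4,     "buffer_gets": 9_800_000, "cpu_time": 880},
]

def top_by(metric, rows, n=1):
    # The worst offender may differ per metric, so check each one.
    return sorted(rows, key=lambda r: r[metric], reverse=True)[:n]

for metric in ("executions", "buffer_gets", "cpu_time"):
    worst = top_by(metric, vsql)[0]
    print(metric, "->", worst["sql"])
```

Note how the statement executed only 4 times still tops the buffer-gets list: a low execution count can hide a very expensive query.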
Regarding pga_aggregate_target: this is a value that is available for sharing across all processes; from the name you've used, I think you may be looking at a figure for a single specific process - so I wouldn't reduce the pga_aggregate_target just yet.
If you want to post a Statspack report to the forum, we may be able to make a few further suggestions. (Use the "code" tags - in curly brackets { } - to make the report readable in a fixed font.)
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"The temptation to form premature theories upon insufficient data is the bane of our profession."
Sherlock Holmes (Sir Arthur Conan Doyle) in "The Valley of Fear". -
One of my friends working on MSSQL told me that recreating procedures can improve database performance - that is, if you have created a procedure, recreating it after inserting a huge volume of data.
Is it true? If so, how?
I think so too, but he said that his Microsoft certified course taught that it could improve performance.
He told me that the SELECT statements previously used rely on the old statistics, but after recreating the procedure they use the new statistics.
Hi all,
Does anyone know what are the side effects with support package 18 for BI 7.00
Thanks,
Joseph M
Hi Joseph,
Please find the list of Side effects that might come during SP upgrade:
Transaction BUP3 opens BP in change mode on first access
PPOM: search function dumps
HRALXSYNC: No repair if BP-integration is partly active
Generation of testplan, testcase, project - long runtime
Unable to logon to due to updating of INDX table
ICF: ICF buffer filled because of special suffixes
Collective corrections: Logon 2/2007
SU01: Password change dialog box and 'Logon data' tab page
Downports: CUA change docs, archiving, 12 hour time format
'Group' Titles are not being displayed.
ALV form : TOP-OF-LIST on TOP-OF-PAGE with page numbers
ALV mean value: mean value calculated incorrectly
ALV Grid: Selection column is no longer displayed
Message logs displayed after sorting
Minor performance improvement in PDF generation
Object pool: Persistent objects are not saved
OutputDevice name is getting chopped in oac0
Files not being deleted from the filesystem after archiving
MS Word as Editor in SAPscript and Smart Forms
RTF download from Unicode systems
Tax amount ignored in transaction total sales
BW 0RENTOBJECT_ATTR, occupancy cost center missing
BW master data, time-independent characteristics are deleted
Termination CELL_FUELLEN_FEMZ-02 during query generation
Field symbol not assigned in CL_RSDD_STATOLAP
BIA: Master data reorganization for Y tables is not adjusted
BIA shadow index: Enhancement of analysis option
Inaccuracies in OLAP cache
Dump TYPELOAD_LOS with insert in /BI0/06* tables
Displaying SQL and EXPLAIN in query statistics
BIA index incorrect after cancelation request
F4 Hierarchy variable ignores version and date restrictions
i_objvers: RSD_IOBJNM_GET_FROM_INFOSET
MDX: Too many values for NON EMPTY and WITH SET
Termination RTIME_APPEND-02- in program SAPLRRS2
X299 Brain in CL_RSDRC_MULTIPROV; form GET_PART_IOBJNM-01-
Buffering the MultiProvider runtime object
Dynamic DATA table during reading of data
Compounding and text variable, dynamic filter
READMODE initial leads to READMODE = A for MultiProviders
Releasing memory OLAP_CACHE
Termination SIDS_DIVIDE in SAPLRRSI and hierarchies
Performance improvement during analysis authorizations
Text variable with replacement path and exception cells
Formula variable not replaced (hierarchy deactivated)
IP: Optimizations for writable InfoProviders
Termination DMMAN 13; reading of delta buffer improved
The OLAP tunnel
Planning functions: Distribution with keys
DB6: Filling the aggregate in blocks with MDC
DB6: Improve performance of data load
RSD_IOBJ_CMP_GET: Compounded navigation attributes
P18:DSO:Dump if you activate too many requests together
P17:DSO:Postprocessing ODS - activating and updating
Connection of MultiProvider validation in RSDMPROM
Post office bank current acct number not checked correctly
Unclear message for creation of bank details
Runtime batch selection
DYNPRO_MSG_IN_HELP runtime error with F1 help for a char
Change documents for AccessControlList
Documentation changes for FiMa
Incorrect read access for immediate repayment settlement
Loading the runtime repositories with inactive plug-ins
Variant: Changing sequence of selected fields
Correction of Note 1099260
table_illegal_statement in base_api_object_syn
BP: TaxNumber: Duplicate check for VAT Registration Number
Dump error when opening a corrupted email from Inbox
"required" attribute(input field) does not work for HE Lang.
Closing of popup(duplicate person) not handled properly.
PCUI : Improving performance for relations fetch.
R3AD_* stop entry in SMQ1 ERP after start initial/req loads
error in displaying adobe forms in portal through preview
BP_XDT: No creation possible of BP who is customer
MS_WORD_OLE_FORMLETTER: Wrong spec. chrctrs in file download
RHBEGDA0: No longer possible to shorten objects
PPPM: Termination on Individual Dev tab page when saving
Technical preparations for enhancement package
Document Flow - Object Pool usage control
Regards
Gattu -
Database performance is very slow
Hi DBA's
Please help me out!
Application users are complaining that database performance is very slow. It's a 10g database on an IBM AIX server.
Please post any changes needed as soon as possible.
Buffer Cache Hit Ratio 94.69
Chained Row Ratio 0
Database CPU Time Ratio 17.21
Database Wait Time Ratio 82.78
Dictionary Cache Hit Ratio 99.38
Execute Parse Ratio -25.6
Get Hit Ratio 70.62
Latch Hit Ratio 99.65
Library Cache Hit Ratio 99.43
Parse CPU to Elapsed Ratio 8.4
Pin Hit Ratio 81.6
Soft-Parse Ratio 94.29
=====================================
NAME TYPE VALUE
cursor_sharing string EXACT
cursor_space_for_time boolean FALSE
nls_currency string
nls_dual_currency string
nls_iso_currency string
open_cursors integer 600
optimizer_secure_view_merging boolean TRUE
session_cached_cursors integer 20
sql92_security boolean FALSE
===========================================================
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 4272M
sga_target big integer 4G
pga_aggregate_target big integer 2980M
Total RAM size is 8 GB
SQL> select username,sid from v$session where username='WPCPRODUSR';
USERNAME SID
WPCPRODUSR 378
WPCPRODUSR 379
WPCPRODUSR 380
...
WPCPRODUSR 524
WPCPRODUSR 525
148 rows selected. -
I have a very large database consisting of around 300,000,000 records with 38 columns in each record.
I need help in tuning the database so that query results don't take more than a few seconds.
The data is the log of events occurring in the SMSC server.
There are about 10,000,000 events daily.
Search criteria are usually a time range and the mobile numbers.
Please advise on the SQL and table structure to use for best performance. Presently I am using composite partitioning, with range partitioning based on time (one partition for each day); a query with a 1-day range takes about 70-80 seconds.
I'm using Java servlets and JDBC for accessing the database.
There's no problem in inserting data.
Hi
Thanks for your replies.
The table structure is as follows :
LOGGING_TIME DATE
LOG_TYPE NUMBER
SUC_INDICATOR NUMBER
ORIG_IW_TYPE NUMBER
ORIG_TYPE NUMBER
ORIG_ADDR VARCHAR2(21)
ORIG_ADDR_LEN NUMBER
DEST_IW_TYPE NUMBER
DEST_TYPE NUMBER
DEST_ADDR VARCHAR2(21)
DEST_ADDR_LEN NUMBER
SMS_CENTRE VARCHAR2(21)
INCOMING_TIME DATE
TIME DATE
ERROR_CAUSE NUMBER
ERROR_ORIGINATOR NUMBER
NO_OF_ATTEMPTS NUMBER
TARIFF_CLASS NUMBER
MSG_LEN NUMBER
PID NUMBER
SR_REQUEST NUMBER
DEFERRED_DEL NUMBER
SERV_DESC NUMBER
REF_NR NUMBER
MAX_NR NUMBER
SEQ_NR NUMBER
SP_MSG_IND NUMBER
DCS NUMBER
ACCESS_METHOD NUMBER
PRIORITY NUMBER
SENDER_CHG_TYPE NUMBER
RECIPIENT_CHG_TYPE NUMBER
SENDER_PREPAID_STATUS NUMBER
RECIPIENT_PREPAID_STATUS NUMBER
CHARGED_PARTY NUMBER
VMSC VARCHAR2(21)
ORIG_IMSI VARCHAR2(21)
CONSO_MSG VARCHAR2(10)
I have used composite partitioning with range partitioning based on logging_time, one day for each partition; there are 32 subpartitions.
I have created a local partitioned index on the columns logging_time, orig_addr and dest_addr, as they are the most commonly used in queries.
query : select /*+ parallel_index(event_logs, event_logs_ind2)*/ * from event_logs where (logging_time>=?) and (logging_time<?) and (orig_addr like ?) and (log_type like ?) and (dest_addr like ?) order by logging_time
execution plan is as follows:
select statement
  partition range (iterator)
    sort (order by)
      partition hash (all)
        table access (by local index rowid) of "smsevent.event_logs"
          index (range scan) of "smsevent.event_logs_ind" (non-unique)
I tried to make this more readable but the white spaces are getting removed from the posted message.
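As a toy sketch of the indexing point (sqlite3 standing in: it has no range partitioning, and the table below is a cut-down, invented version of event_logs), a range predicate on the leading indexed column lets the engine do an index range scan instead of reading every row:

```python
import sqlite3

# Cut-down stand-in for the event_logs table, with an index whose leading
# column is logging_time, as in the thread.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE event_logs (
    logging_time TEXT, orig_addr TEXT, dest_addr TEXT)""")
conn.execute("CREATE INDEX ev_ix ON event_logs (logging_time, orig_addr)")
conn.executemany(
    "INSERT INTO event_logs VALUES (?, ?, ?)",
    [("2024-01-%02d" % (d + 1), "addr%d" % d, "dest%d" % d) for d in range(28)],
)

# A 1-day range predicate on the leading indexed column.
ranged = "SELECT * FROM event_logs WHERE logging_time >= ? AND logging_time < ?"
plan = " ".join(
    row[3]
    for row in conn.execute("EXPLAIN QUERY PLAN " + ranged,
                            ("2024-01-10", "2024-01-11"))
)
print(plan)  # search step reported as using the index
rows = conn.execute(ranged, ("2024-01-10", "2024-01-11")).fetchall()
print(len(rows))  # 1
```

The same range-scan behavior is what the local partitioned index above is meant to give within each daily partition; a predicate like `orig_addr LIKE '%...'` with a leading wildcard cannot use the index this way.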
looking forward to a quick response -
Database Performance Monitoring
Hi,
I use Oracle 11.2.0.2.0 on the IBM AIX 6.1 operating system.
My client/users are complaining that the application process is taking longer than usual, especially when they are implementing some module in their applications.
So when I closely monitored my production (LIVE) database at the time of implementation, I was unable to find any issues on the DB side. So what are all the possible areas to focus on in this situation?
I really think it could also be possible that the issue is a network failure or slow bandwidth.
So what I really expect is: is there any monitoring tool, or any trigger, applicable/available for this scenario?
Looking for Helpful Answers..
Regards
Faiz
For information, here I post my actual scenario:
Only in two out of 200 client branches was the application performance taking longer than usual.
So I enabled trace (TKPROF) for the corresponding sessions, and also generated and analyzed an AWR report during that particular time.
I found no issues from the database side. Later I came to know the actual issue was on the network side (i.e. network speed was very poor).
So henceforth, I have been asked that if the problem persists again, I need to make sure the problem does not belong to the network part before I go and check database performance.
So is there any tool, monitoring script, or package available to confirm that the actual problem is not a network-related issue, before checking DB performance? -
Cursor_sharing Side-effects
I have a situation where several off-the-shelf applications (same vendor) are running on the same instance. One of them is performing poorly, and I was able to get a good performance boost with cursor_sharing=force. After much testing in QA, we are ready to move it to production.
Now two of the other applications are having trouble. Apparently, the applications do some very basic selects from SQL*Plus and then parse the results. Setting cursor_sharing to force has had a side effect (bug) that changes the column widths of these selects. The end result is that the other application fails because it can't parse the output correctly.
This is a documented problem and Oracle recommends to always explicitly set your column widths in sqlplus. This is what we want to do, but the effort is not small.
A kludge work-around is to alter cursor_sharing before and after the batch processes. This can be done at either the system or session level.
My question is this: is there a simple way to set it up so that when this black-box application creates a session, it will set cursor_sharing to force?
Thanks,
Scott
http://www.erpfuture.com
I have a situation where several off-the-shelf applications (same vendor) are running on the same instance. One of them is performing poorly and I was able to get a good performance boost with cursor_sharing=force. After much testing in QA, we are ready to move it to production.
Now two of the other applications are having trouble. Apparently, the applications do some very basic selects from SQL*Plus and then parse the results. Setting cursor_sharing to force has had a side-effect (bug) that changes the column widths of these selects.
Actually it is the off-the-shelf applications that have the bug: they are not using bind variables, which means you are overparsing and fragmenting your shared pool. Poor performance is about the best you will get from such applications. You are also likely open to security issues that arise from [url=http://www.google.com/search?q=sql+injection]sql injection[/url].
Cursor sharing force is a workaround for a badly written application. It auto-binds all literals. This means plans will change, and all literal values become variables that could contain anything, which leads to the problem you describe.
select 'test' from dual;
becomes
select :b_sys_0 from dual;
where :b_sys_0 could be 4000 characters long.
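A minimal sketch of the difference bind variables make, with sqlite3 standing in for Oracle (table and data invented for illustration): one statement text serves every value, and a hostile value stays data instead of becoming SQL.

```python
import sqlite3

# Invented table standing in for whatever the application queries.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (owner TEXT, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 200)])

# One statement text, many values: no per-literal statement, and a value
# like "x' OR '1'='1" is bound as data rather than concatenated as SQL.
stmt = "SELECT balance FROM accounts WHERE owner = ?"
for owner in ("alice", "bob", "x' OR '1'='1"):
    print(owner, conn.execute(stmt, (owner,)).fetchall())
```

This is what a well-written application does itself; cursor_sharing=force merely fakes it after the fact by rewriting literals into system-generated binds like :b_sys_0.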
I would second kamathg's advice that if you need to use the cursor sharing workaround to only set it at the session level for the application that needs it using the logon trigger.
You should do this as an interim measure while you file a bug report with the software vendor to have them fix their application.
The security issues do not go away. -
Side effects of using catmeta.sql?
Not exactly sure where to put this, but I've been having problems exporting, mostly around the XMLGEN part. It seems that the solution would be to rebuild all the views and whatnot, but since this is a live database, I'd like to know if there are any side effects accompanying using catmeta.sql or any of the related files.
Thanks in advance!
Running Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production under Windows XP. The error comes up during export as:
Connected to: Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user TEST
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user TEST
About to export TEST's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
EXP-00056: ORACLE error 19206 encountered
ORA-19206: Invalid value for query or REF CURSOR parameter
ORA-06512: at "SYS.DBMS_XMLGEN", line 83
ORA-06512: at "SYS.DBMS_METADATA", line 345
ORA-06512: at "SYS.DBMS_METADATA", line 410
ORA-06512: at "SYS.DBMS_METADATA", line 449
ORA-06512: at "SYS.DBMS_METADATA", line 1156
ORA-06512: at "SYS.DBMS_METADATA", line 1141
ORA-06512: at line 1
EXP-00000: Export terminated unsuccessfully -
Side effects of the /$sync transaction
Hi!
Does anyone know whether there are any side effects of using the transaction code /$sync, which cleans up all buffers?
Personally I'm using it to refresh an ALV table if its structure was modified.
But it is applied only to my session's buffers, and it will not harm other people or kick them out of the SAP system.
Am I right?
Thank you
Tamá
It will not kick out other people, but it does clear ALL buffers, not only yours. Therefore you should not do it yourself, but rather ask the Basis people whether you can and may do it, or let them do it.
help.sap.com:
command $SYNC to reset all the SAP buffers on the application server. These commands only affect the buffers of the application server on which the commands are entered. The buffers of the other application servers in the network are not affected.
Using the commands $TAB and $SYNC places an extremely large load on the system. In large systems, it could take up to one hour (depending on the access profile) for the buffer load to return to its original state. System performance is greatly impeded during this time.
Edited by: Micky Oestreich on Apr 18, 2008 10:39 AM -
Can archive log backup influence database performance?
Hi,
Can an archive log backup generally influence database performance? I mean: could users see their queries slow down during a backup of the archived redo logs?
Are you asking about backing up the archived redo logs via RMAN or directly to tape, or about the actual archive process where Oracle backs up the online redo to disk?
-- comments on archive process
Normally the redo log archiving process should have no noticeable effect on database performance. About the only way for the process to have a noticeable performance impact while it is running is if you store all your online redo logs on the same physical disk. You would also want the backup to be on a different physical disk.
Check your alert log to make sure you do not have error messages about being unable to switch redo logs, or "checkpoint incomplete" messages. These would be an indication that your online redo logs are too small and you are trying to cycle around before Oracle has finished archiving the older logs.
-- comments on archived redo log backup
Archived redo logs should not be on the same disk as the database, so using RMAN or an OS task to back these files up should not impact Oracle, unless your server is itself near capacity and any additional task affects it.
HTH -- Mark D Powell -- -
Side effects of "_system_trig_enabled"
Hi! I'm having trouble with setting the parameter "_system_trig_enabled".
Actually, I just migrated our Oracle 8i database to Oracle 9i, and since I enabled this parameter, some Forms applications have presented side effects.
One of these side effects is that a visual component of our application doesn't show results from a query. When I disable "_system_trig_enabled", the applications present normal behavior again.
I'd like to ask if anyone knows what's happening. I've noticed this parameter is affecting only the applications running on Windows 95/98; Win2K applications behave normally, independent of the parameter.
Best regards,
Marcio.
You have told us almost nothing.
1. What version of Oracle? Surely you don't think they are all the same.
2. What "other" queries? We have no idea what you are doing.
It is impossible to answer your question without knowing a lot about your system.
Simply put any change that affects optimizer behaviour, by definition, affects optimizer behaviour.
How that may or may not affect any particular system requires testing ... not a ouija board or tarot cards. -
Side Effects of not installing Java/XDB
Hi,
are there any side effects known (10g/11g), when not installing one of the following options:
Spatial
Oracle interMedia
OLAP Catalog
Oracle XML Database
Oracle Text
Oracle Expression Filter
Oracle Rules Manager
Oracle Workspace Manager
Oracle Data Mining
JServer JAVA Virtual Machine
Oracle XDK
Oracle Database Java Packages
OLAP Analytic Workspace
Oracle OLAP API
As I know, Java/XDB is mandatory in 11g because of the new "firewall features" for the packages utl_tcp, utl_mail, utl_http, ...
Has anybody had interesting side effects when not all options/users were installed?
Thanks
Marco
It only has a side effect when you run applications that require an option that is not installed.
Install what you need based on your requirements.